
Steady-state solutions for a reaction–diffusion equation with Robin boundary conditions: Application to the control of dengue vectors

Published online by Cambridge University Press:  18 September 2023

Luis Almeida
Affiliation:
Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France
Pierre-Alexandre Bliman
Affiliation:
INRIA, Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France
Nga Nguyen*
Affiliation:
INRIA, Laboratory Jacques-Louis Lions UMR7598, Sorbonne University, CNRS, Paris, 75005, France LAGA, CNRS UMR 7539, Institut Galilee, University Sorbonne Paris Nord, Villetaneuse, 93430, France
Nicolas Vauchelet
Affiliation:
LAGA, CNRS UMR 7539, Institut Galilee, University Sorbonne Paris Nord, Villetaneuse, 93430, France
*
Corresponding author: Nga Nguyen; Email: thiquynhnga.nguyen@math.univ-paris13.fr

Abstract

In this paper, we investigate an initial-boundary value problem of a reaction–diffusion equation in a bounded domain with a Robin boundary condition and introduce some particular parameters to take the non-zero flux on the boundary into account. This problem arises in the study of mosquito populations under the intervention of the population replacement method, where the boundary condition takes into account the inflow and outflow of individuals through the boundary. Using phase plane analysis, the present paper studies the existence and properties of non-constant steady-state solutions depending on several parameters. Then, we prove some sufficient conditions for their stability. We show that the long-time efficiency of this control method depends strongly on the size of the treated zone and the migration rate. To illustrate these theoretical results, we provide some numerical simulations in the framework of mosquito population control.

Type
Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1. Introduction

The study of scalar reaction–diffusion equations $\partial _t p - \Delta p = f(p)$ with a given non-linearity $f$ has a long history. For suitable choices of $f$ , this equation can be used to model some phenomena in biology such as population dynamics (see e.g. [Reference Fife4, Reference Murray16, Reference Smoller and Wasserman25]). To investigate the structure of the steady-state solutions, the semilinear elliptic equation $\Delta p + f(p) = 0$ has been studied extensively.

Many results are known about the multiplicity of positive solutions for the parametrised version $\Delta p + \lambda f(p) = 0$ in a bounded domain, where $\lambda$ is a positive parameter. Various works investigated the number of solutions and the global bifurcation diagrams of this equation for different classes of non-linearities $f$ and boundary conditions. For Dirichlet problems, Lions [Reference Lions15] used many ‘bifurcation diagrams’ to describe the solution set of this equation for several kinds of non-linearities $f$ and gave nearly optimal multiplicity results in each case. The exact number of solutions and the precise bifurcation diagrams for cubic-like non-linearities $f$ were given in the works of Korman et al. [Reference Korman, Li and Ouyang13, Reference Korman, Li and Ouyang14], Ouyang and Shi [Reference Ouyang and Shi18], and references therein. In these works, the authors developed a global bifurcation approach to obtain the exact multiplicity of positive solutions; Korman surveyed this approach for one-dimensional two-point boundary value problems in [Reference Korman12]. Another approach, based on phase plane analysis and the time mapping method, was introduced by Smoller and Wasserman in [Reference Smoller24] and further developed and applied in the works of Wang [Reference Wang28, Reference Wang and Kazarinoff29]. While the bifurcation approach is convenient for more general cubic non-linearities $f$, the phase plane method is more intuitive and computationally simpler.

Although many results were obtained on the number of solutions for Dirichlet problems, relatively little seems to be known for other kinds of boundary conditions. For the Neumann problem, the works of Smoller and Wasserman [Reference Smoller24], Schaaf [Reference Schaaf21], and Korman [Reference Korman11] dealt with cubic-like non-linearities $f$ in one dimension. More recently, results have been obtained for Robin boundary conditions (see e.g. [Reference Daners3, Reference Shi and Li22, Reference Zhang, Li and Xue33]), Neumann–Robin boundary conditions (see e.g. [Reference Tsai, Wang and Huang27]), and even non-linear boundary conditions (see e.g. [Reference Goddard and Shivaji6, Reference Gordon, Ko and Shivaji7] and references therein). However, those works focused on other types of non-linearities, such as positive or monotone $f$. An analogous problem with an advection term was studied in [Reference Wang, Shi and Wang31, Reference Wang and Shi30] for cubic-like non-linearities, but those works used a homogeneous non-symmetric Robin boundary condition to characterise an open or closed environment boundary. To the best of our knowledge, the study of inhomogeneous symmetric Robin problems with cubic-like non-linearities remains quite open.

In this paper, we study the steady-state solutions with values in $[0,1]$ of a reaction–diffusion equation in one dimension with inhomogeneous Robin boundary conditions:

(1.1a) \begin{align} \partial _t p^0 - \partial _{xx} p^0 = f(p^0), & \hspace{0.5 cm} & (t,x) \in (0,T) \times \Omega, \end{align}
(1.1b) \begin{align} \frac{\partial p^0}{\partial \nu } = -D(p^0 - p^{\mathrm{ext}}), & \hspace{0.5 cm}& (t,x) \in (0,T) \times \partial \Omega, \end{align}
(1.1c) \begin{align} p^0(0,x) = p^{\mathrm{init}}(x), &\hspace{0.5 cm} & x \in \Omega, \end{align}

where $\Omega = (\!-\!L,L)$ is a bounded domain in $\mathbb{R}$ and $T \gt 0$ is a fixed time horizon. The steady-state solutions satisfy the following elliptic boundary value problem:

(1.2a) \begin{align} -p''(x) = f(p(x)), & \hspace{0.5 cm} & x \in (\!-\!L,L), \end{align}
(1.2b) \begin{align} p'(L) = -D(p(L) - p^{\mathrm{ext}}), & \hspace{0.5 cm}& \end{align}
(1.2c) \begin{align} -p'(\!-\!L) = -D(p(\!-\!L) - p^{\mathrm{ext}}), & \hspace{0.5 cm}& \end{align}

where $L \gt 0$ , $D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ are constants. The reaction term $f \;:\; [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1$ , with three roots $\{0, \theta, 1\}$ where $0 \lt \theta \lt 1$ (see Figure 1(a)). The dynamics of (1.1) are determined by the structure of the steady-state solutions, which satisfy (1.2). Note that, after the change of variable $y = x/L$, (1.2) becomes $p''(y) + L^2 f(p(y)) = 0$ on $(\!-\!1,1)$ with parameter $L^2$. Thus, we study problem (1.2) with three parameters $L\gt 0, D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ .

Figure 1. Sketch of the functions $f$ and $F$ .

The Robin boundary condition considered in (1.1) and (1.2) means that the flow across the boundary points is proportional to the difference between the surrounding density and the density just inside the interval. Here, we assume that $p^{\mathrm{ext}}$ depends neither on the space variable $x$ nor on the time variable $t$ .

The existence of classical solutions for such problems has been studied widely in the theory of elliptic and parabolic differential equations (see, e.g. [Reference Pao19]). In our problem, due to difficulties caused by the inhomogeneous Robin boundary condition and the variety of parameters, we cannot obtain the exact multiplicity of solutions. However, our main results in Theorems 2.2 and 2.3 show how the existence of solutions and their ‘shapes’ depend on the parameters $D, p^{\mathrm{ext}}$ and $L$ . The phase plane analysis and time mapping method of [Reference Smoller24] are extended to prove these results.

Since the solutions of (1.2) are equilibria of (1.1), their stability and instability are the next problems that we want to investigate. The stability analysis of the non-constant steady-state solutions is a delicate problem, especially when the system under consideration has multiple steady-state solutions. In Theorem 2.5, we use the principle of linearised stability to give some sufficient conditions for stability. Finally, as a consequence of these theorems, we obtain Corollary 2.1 which provides a comprehensive result about existence and stability of the steady-state solutions when the size $L$ is small.

The main biological application of our results is the control of dengue vectors. Aedes mosquitoes are vectors of many vector-borne diseases, including dengue. Recently, a biological control method using an endosymbiotic bacterium called Wolbachia has gathered a lot of attention. Wolbachia helps reduce the vectorial capacity of mosquitoes and can be passed to the next generation. Massive release of mosquitoes carrying this bacterium in the field is thus considered as a possible method to replace wild mosquitoes and prevent dengue epidemics. Reaction–diffusion equations have been used in previous works to model this replacement strategy (see [Reference Barton and Turelli1, Reference Chan and Kim2, Reference Strugarek and Vauchelet26]). In this work, we introduce the Robin boundary condition to describe the migration of mosquitoes through the boundary. Since inflows of wild mosquitoes and outflows of mosquitoes carrying Wolbachia may affect the efficiency of the method, the study of existence and stability of steady-state solutions depending on parameters $D, p^{\mathrm{ext}}$ and $L$ as in (1.2), (1.1) will provide necessary information to maintain the success of the control method using Wolbachia under the effects of migration.

Problem (1.1) arises often in the study of population dynamics, where $p^0$ is usually the relative proportion of one population when two populations are in competition. This is why we focus only on solutions with values in the interval $[0,1]$ . Problem (1.1) is derived from the idea of [Reference Strugarek and Vauchelet26], where the authors reduce a reaction–diffusion system modelling the competition between two populations $n_1$ and $n_2$ to a scalar equation on the proportion $p = \frac{n_1}{n_1 + n_2}$ . More precisely, they consider two populations with a very high fecundity rate scaled by a parameter $\epsilon \gt 0$ and propose the following system depending on $\epsilon$ for $t \gt 0, x \in \mathbb{R}^d$ :

(1.3a) \begin{align} \partial _t n_1^\epsilon - \Delta n_1^\epsilon = n_1^\epsilon\; f_1(n_1^\epsilon,n_2^\epsilon ), \end{align}
(1.3b) \begin{align} \partial _t n_2^\epsilon - \Delta n_2^\epsilon = n_2^\epsilon\; f_2(n_1^\epsilon,n_2^\epsilon ). \end{align}

The authors obtained that under some appropriate conditions, the proportion $p^\epsilon = \frac{n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }$ converges strongly in $L^2(0,T;\;L^2(\mathbb{R}^d))$ , and weakly in $L^2(0,T;\;H^1(\mathbb{R}^d))$ to the solution $p^0$ of the scalar reaction–diffusion equation $\partial _t p^0 - \Delta p^0 = f(p^0)$ when $\epsilon \rightarrow 0$ , where $f$ can be given explicitly from $f_1, f_2$ . Now, in order to describe and study the migration phenomenon, we aim here at considering system (1.3) in a bounded domain $\Omega$ and introduce the boundary conditions to characterise the inflow and outflow of individuals as follows:

(1.4a) \begin{align} \frac{\partial n_1^\epsilon }{\partial \nu } = -D(n_1^\epsilon - n_1^{\mathrm{ext},\epsilon }), & \hspace{0.5cm} & (t,x) \in (0,T) \times \partial \Omega, \end{align}
(1.4b) \begin{align} \frac{\partial n_2^\epsilon }{\partial \nu } = -D(n_2^\epsilon - n_2^{\mathrm{ext},\epsilon }), & \hspace{0.5cm} & (t,x) \in (0,T) \times \partial \Omega, \end{align}

where $n_1^{\mathrm{ext},\epsilon }, n_2^{\mathrm{ext},\epsilon }$ depend on $\epsilon$ but do not depend on time $t$ and position $x$ . Equation (1.4) models the tendency of the population to cross the boundary, with rates proportional to the difference between the surrounding density and the density just inside $\Omega$ . Reusing the idea in [Reference Strugarek and Vauchelet26], we prove in Section A that the proportion $p^\epsilon = \frac{n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }$ converges on any bounded time domain to the solution of (1.1) when $\epsilon$ goes to zero. Hence, we can reduce the system (1.3) and (1.4) to a simpler setting as in (1.1). The proof is based on a relative compactness argument that was also used in previous works about singular limits (e.g. [Reference Hilhorst, Iida, Mimura and Ninomiya8, Reference Hilhorst, Martin and Mimura9, Reference Strugarek and Vauchelet26]), but here, the use of the trace theorem is necessary to prove the limit on the boundary.

The outline of this work is the following. In the next section, we present the setting of the problem and the main results. In Section 3, we provide detailed proof of these results. Section 4 is devoted to an application to the biological control of mosquitoes. We also present numerical simulations to illustrate the theoretical results we obtained. Section A is devoted to proving the asymptotic limit of a 2-by-2 reaction–diffusion system when the reaction rate goes to infinity. Finally, we end this article with a conclusion and perspectives section.

2. Results on the steady-state solutions

2.1 Setting of the problem

In one-dimensional space, consider the system (1.1) in a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ . Let $D \gt 0$ and $p^{\mathrm{ext}} \in (0,1)$ be constants, and let $p^{\mathrm{init}}(x) \in [0,1]$ for all $x \in (\!-\!L,L)$ . The reaction term $f$ satisfies the following assumptions:

Assumption 2.1 (bistability). Function $f \;:\; [0,1] \rightarrow \mathbb{R}$ is of class $\mathcal{C}^1([0,1])$ and $f(0) = f(\theta ) = f(1) = 0$ with $\theta \in (0,1)$ , $f(q) \lt 0$ for all $q \in (0,\theta )$ , and $f(q) \gt 0$ for all $q \in (\theta,1)$ . Moreover, $\displaystyle \int _{0}^{1} f(s)ds \gt 0$ .

Assumption 2.2 (convexity). There exist $\alpha _1 \in (0,\theta )$ and $\alpha _2 \in (\theta,1)$ such that $f'(\alpha _1) = f'(\alpha _2) = 0$ , $f'(q) \lt 0$ for any $q \in [0,\alpha _1) \cup (\alpha _2,1]$ , and $f'(q) \gt 0$ for $q \in (\alpha _1,\alpha _2)$ . Moreover, $f$ is convex on $(0,\alpha _1)$ and concave on $(\alpha _2,1)$ .

A function $f$ satisfying Assumptions 2.1 and 2.2 is illustrated in Figure 1(a).

Remark 2.1.

  1. Due to Assumption 2.1 and the fact that $p^{\mathrm{ext}} \in (0,1)$ and $p^{\mathrm{init}}(x) \in [0,1]$ for any $x$ , the constants $0$ and $1$ are respectively sub- and super-solutions of problem (1.1). Since $f$ is Lipschitz continuous on $(0,1)$ , Theorem 4.1, Section 2.4 in [Reference Pao19] shows that problem (1.1) has a unique solution $p^0 \in \mathcal{C}^{1,2}((0,T]\times \Omega )$ with $0 \leq p^0(t,x) \leq 1$ for all $x \in (\!-\!L,L), t \gt 0$ .

  2. Again by Assumption 2.1, $0$ and $1$ are respectively sub- and super-solutions of (1.2). For fixed values of $D, p^{\mathrm{ext}}$ and $L$ , the same method as in [Reference Pao19] yields the existence of a $\mathcal{C}^2$ solution of (1.2) with values in $[0,1]$ . However, Assumptions 2.1 and 2.2 on $f$ are not enough to conclude uniqueness. In the following section, we prove that the stationary problem (1.2) may have multiple solutions and that their existence depends on the values of the parameters.

The following proposition shows that solutions of system (1.2) always have at least one extreme value in $(\!-\!L, L)$ .

Proposition 2.1. For any $p^{\mathrm{ext}} \in (0,1)$ with $p^{\mathrm{ext}} \neq \theta$ , system (1.2) does not have any non-constant monotone solution on the whole interval $(\!-\!L, L)$ .

Proof. Assume that (1.2) admits an increasing solution $p$ on $(\!-\!L,L)$ (the case when $p$ is decreasing on $(\!-\!L,L)$ is analogous). Thus, we have $p'(x) \geq 0$ for all $x \in [\!-\!L,L]$ and $p(L) \gt p(\!-\!L)$ . So thanks to the boundary condition of (1.2), one has

\begin{equation*} D p^{\mathrm {ext}} = p'(L) + Dp(L) \geq Dp(L) \gt Dp(\!-\!L) \geq -p'(\!-\!L) + Dp(\!-\!L) = D p^{\mathrm {ext}}, \end{equation*}

which is impossible. Therefore, we can deduce that the solutions of system (1.2) always admit at least one local extremum on the open interval $(\!-\!L,L)$ .

To study system (1.2), we define function $F$ (see Figure 1(b)) as follows:

(2.1) \begin{equation} F(q) = \displaystyle \int _{0}^{q} f(s)ds, \end{equation}

then $F'(q) = f(q)$ and $F(0) = 0$ . From Assumption 2.1, $F$ attains its minimum at $q = \theta$ and its (local) maxima at $q = 0$ and $q = 1$ . Since $\displaystyle \int _0^1 f(s) ds \gt 0$ , we have $F(1) \gt F(0)$ , hence $F(1) = \displaystyle \max _{[0,1]} F$ and $F(\theta ) = \displaystyle \min _{[0,1]}F$ . Moreover, $F(\theta ) \lt F(0)$ and $F$ is increasing on $(\theta,1)$ (since $F'(q) = f(q) \gt 0$ for any $q \in (\theta,1)$ ). Thus, there exists a unique value $\beta \in (\theta,1)$ such that

(2.2) \begin{equation} F(\beta ) = F(0) = 0. \end{equation}
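As a purely illustrative sketch (not part of the paper's analysis), the threshold $\beta$ of (2.2) can be located numerically for the model bistable cubic $f(q) = q(q-\theta)(1-q)$ with the assumed value $\theta = 0.3$ (which satisfies Assumptions 2.1 and 2.2 since $\theta \lt 1/2$): as $F(\theta) \lt 0 \lt F(1)$ and $F$ is increasing on $(\theta,1)$, bisection applies.

```python
# Locate beta in (theta, 1) with F(beta) = F(0) = 0 for the model cubic
# f(q) = q(q - theta)(1 - q); theta = 0.3 is an assumed illustrative value.

theta = 0.3

def f(q):
    return q * (q - theta) * (1.0 - q)

def F(q):
    # Closed-form antiderivative of f, normalised so that F(0) = 0.
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def find_beta(tol=1e-12):
    # F(theta) < 0 < F(1) and F is increasing on (theta, 1), so bisect.
    lo, hi = theta, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta = find_beta()
print(beta)  # ~0.4779 for theta = 0.3
```

For this cubic, $F(q) = q^2(-3q^2 + 4(1+\theta)q - 6\theta)/12$, so $\beta$ also solves a quadratic; the bisection value agrees with that closed form.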

The main results of the present work concern existence and stability of steady-state solutions of (1.1), that is, solutions of (1.2).

2.2 Existence of steady-state solutions

In our result, we first focus on two types of steady-state solutions defined as follows:

Definition 2.1. Consider a steady-state solution $p(x)$ ,

$p$ is called a symmetric-decreasing (SD) solution when $p$ is symmetric on $(\!-\!L,L)$ with values in $[0,1]$ , decreasing on $(0,L)$ and $p'(0) = 0$ (see Figure 2(a)).

Figure 2. Sketch of the symmetric steady-state solutions $p$ .

Similarly, $p$ is called a symmetric-increasing (SI) solution when $p$ is symmetric on $(\!-\!L,L)$ with values in $[0,1]$ , increasing on $(0,L)$ and $p'(0) = 0$ (see Figure 2(b)).

Any solution which is either SD or SI is called a symmetric-monotone (SM) solution.

The following theorems present the main results on the existence of SM solutions depending on the parameters. For each value of $p^{\mathrm{ext}} \in (0,1)$ and $D \gt 0$ , we find the critical values of $L$ such that (1.2) admits solutions.

Theorem 2.2. In a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the stationary problem (1.2). Assume that the reaction term $f$ satisfies Assumptions 2.1 and 2.2. Then, there exist two functions:

(2.3) \begin{equation} \begin{array}{c@{\quad}r@{\quad}c@{\quad}l} M_{d}, M_i: & (0,1) \times (0,+\infty ) & \longrightarrow & [0,+\infty ], \\[5pt] & (p^{\mathrm{ext}},D) & \longmapsto & M_d(p^{\mathrm{ext}},D), M_i(p^{\mathrm{ext}},D), \end{array} \end{equation}

such that for any $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , problem (1.2) admits at least one SD solution (resp., SI solution) if and only if $L \gt M_{d}(p^{\mathrm{ext}},D)$ (resp., $L \gt M_{i}(p^{\mathrm{ext}},D)$ ), and the values of these solutions are in $[p^{\mathrm{ext}}, 1]$ (resp., $[0,p^{\mathrm{ext}}]$ ). More precisely,

  1. If $0 \lt p^{\mathrm{ext}} \lt \theta$ , then for any $D \gt 0$ , $M_{i}(p^{\mathrm{ext}},D) = 0$ and $M_{d}(p^{\mathrm{ext}},D) \in (0,+\infty )$ . Moreover, if $p^{\mathrm{ext}} \leq \alpha _1$ , the SI solution is unique.

  2. If $\theta \lt p^{\mathrm{ext}} \lt 1$ , then for any $D \gt 0$ , $M_d(p^{\mathrm{ext}},D) = 0$ . If $\alpha _2 \leq p^{\mathrm{ext}}$ , the SD solution is unique. Moreover, consider $\beta$ as in (2.2),

    • if $p^{\mathrm{ext}} \leq \beta$ , then $M_i(p^{\mathrm{ext}},D) \in (0,+\infty )$ for any $D \gt 0$ ;

    • if $p^{\mathrm{ext}} \gt \beta$ , then there exists a constant $D_* \gt 0$ such that $M_i(p^{\mathrm{ext}},D) \in (0,+\infty )$ for any $D \lt D_*$ , and $M_i(p^{\mathrm{ext}}, D) = +\infty$ for $D \geq D_*$ .

  3. If $p^{\mathrm{ext}} = \theta$ , then $M_d(\theta, D) = M_i(\theta, D)= 0$ . Moreover, there exists a constant solution $p \equiv \theta$ .

In the statement of the above result, $M_i = 0$ means that for any $L \gt 0$ , (1.2) always admits SI solutions. $M_i = +\infty$ means that there is no SI solution even when $L$ is large. The same interpretation applies for $M_d$ .

Besides, problem (1.2) can also admit solutions that are neither SD nor SI. The following theorem provides an existence result for those solutions.

Table 1. The existence of steady-state solutions corresponding to values of parameters

Theorem 2.3. In a bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the stationary problem (1.2). Assume that the reaction term $f$ satisfies Assumptions 2.1 and 2.2. Then, there exists a function:

(2.4) \begin{equation} \begin{array}{c r c l} M_*: & (0,1) \times (0,+\infty ) & \longrightarrow & [0,+\infty ], \\[5pt] & (p^{\mathrm{ext}},D) & \longmapsto & M_*(p^{\mathrm{ext}},D), \end{array} \end{equation}

such that for any $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , problem (1.2) admits at least one solution which is not SM if and only if $L \geq M_{*}(p^{\mathrm{ext}},D)$ . Moreover,

  • If $p^{\mathrm{ext}} \leq \beta$ , then for any $D \gt 0$ , one has

    (2.5) \begin{equation} 0 \lt M_i(p^{\mathrm{ext}},D) + M_d(p^{\mathrm{ext}},D) \lt M_*(p^{\mathrm{ext}},D) \lt +\infty. \end{equation}
  • If $p^{\mathrm{ext}} \gt \beta$ , then for any $D \lt D_*$ , one has $0 \lt M_i(p^{\mathrm{ext}},D) \lt M_*(p^{\mathrm{ext}},D) \lt +\infty$ . Otherwise, for $D \geq D_*$ , $M_*(p^{\mathrm{ext}},D) = +\infty$ . Here, $D_*$ was defined in Theorem 2.2.

The construction of $M_i, M_d, M_*$ will be done in the proof in Section 3. The idea of the proof is based on a careful study of the phase portrait of (1.2). To make the results more reader-friendly, we present the types of steady-state solutions corresponding to different parameters in Table 1.

In the next section, we present a result about the stability and instability of steady-state solutions of (1.2).

2.3 Stability of steady-state solutions

The definition of stability and instability used in the present work comes from Lyapunov stability:

Definition 2.4. A steady-state solution $p(x)$ of (1.1) is called stable if for any constant $\epsilon \gt 0$ , there exists a constant $\delta \gt 0$ such that when $||p^{\mathrm{init}} - p||_{ \infty } \lt \delta$ , one has

(2.6) \begin{equation} ||p^0(t,\cdot ) - p||_{ \infty } \lt \epsilon, \quad \text{ for all } t \gt 0, \end{equation}

where $p^0(t,x)$ is the unique solution of (1.1). If, in addition,

(2.7) \begin{equation} \displaystyle \lim _{t \rightarrow \infty }||p^0(t,\cdot ) - p||_{ \infty } = 0, \end{equation}

then $p$ is called asymptotically stable. The steady-state solution $p$ is called unstable if it is not stable.

The following theorem provides sufficient conditions for the stability of steady-state solutions given in Section 2.2.

Theorem 2.5. In the bounded domain $\Omega = (\!-\!L,L) \subset \mathbb{R}$ , consider the problem (1.1) with the reaction term satisfying Assumptions 2.1 and 2.2. There exists a constant $\lambda _1 \in \left (0,\frac{\pi ^2}{4L^2} \right )$ such that for any steady-state solution $p$ of (1.1),

  • If $f'(p(x)) \gt \lambda _1$ for all $x \in (\!-\!L,L)$ , then $p$ is unstable.

  • If $f'(p(x)) \lt \lambda _1$ for all $x \in (\!-\!L,L)$ , then $p$ is asymptotically stable.

More precisely, $\lambda _1$ is the principal eigenvalue of the linear problem (2.8):

(2.8a) \begin{align} -\phi ''(x) = \lambda \phi (x), & \hspace{0.5 cm} & x \in (\!-\!L,L), \end{align}
(2.8b) \begin{align} \phi '(L) = -D\phi (L), & \hspace{0.5 cm}& \end{align}
(2.8c) \begin{align} \phi '(\!-\!L) = D\phi (\!-\!L), & \hspace{0.5 cm}& \end{align}

where $\lambda$ is an eigenvalue with associated eigenfunction $\phi$ . It may be proved that $\lambda _1$ is the smallest positive solution of the equation $\sqrt{\lambda }\tan{\left (L\sqrt{\lambda }\right )} = D$ (see more details in Section 3).
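For concreteness, $\lambda_1$ can be computed by bisection: writing $\mu = \sqrt{\lambda}$, the function $\mu \mapsto \mu\tan(L\mu)$ increases from $0$ to $+\infty$ on $(0, \pi/(2L))$, so the equation $\mu\tan(L\mu) = D$ has a unique root there, consistent with $\lambda_1 \in (0, \pi^2/(4L^2))$. A minimal sketch, with the parameter values $L = D = 1$ chosen purely for illustration:

```python
import math

# Principal eigenvalue lambda_1 of (2.8): smallest positive solution of
# sqrt(lambda) * tan(L * sqrt(lambda)) = D.  With mu = sqrt(lambda),
# mu * tan(L * mu) increases from 0 to +infinity on (0, pi/(2L)), so the
# root is found by bisection and lambda_1 < pi^2 / (4 L^2).

def principal_eigenvalue(L, D, tol=1e-12):
    lo, hi = 0.0, math.pi / (2 * L) - 1e-12
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if mu * math.tan(L * mu) < D:
            lo = mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return mu * mu

lam1 = principal_eigenvalue(L=1.0, D=1.0)
print(lam1)  # strictly between 0 and pi^2/4
```

For $L = D = 1$ this reduces to the classical transcendental equation $\mu\tan\mu = 1$.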

Note that we cannot apply the first statement if $\displaystyle \sup _{q \in (0,1)} f'(q) \leq \lambda _1$ . However, due to the fact that $\lambda _1 \in \left (0,\frac{\pi ^2}{4L^2}\right )$ , when $L$ gets larger, the value of $\lambda _1$ gets closer to zero and the inequality in the first statement becomes valid.

Remark 2.2. By Assumption 2.2, $f'(q) \leq 0 \lt \lambda _1$ for all $q \in [0,\alpha _1] \cup [\alpha _2,1]$ , so the steady-state solutions with values smaller than $\alpha _1$ or larger than $\alpha _2$ are asymptotically stable.

As a consequence of Theorems 2.2, 2.3 and 2.5, the following important result provides complete information about the existence and stability of steady-state solutions in some special cases.

Corollary 2.1. In the bounded domain $\Omega = (\!-\!L, L) \subset \mathbb{R}$ , consider the problem (1.1) with the reaction term satisfying Assumptions 2.1 and 2.2. Then for any $D \gt 0$ , we have

  • If $p^{\mathrm{ext}} \leq \alpha _1$ , for any $L \gt 0$ , there exists exactly one SI steady-state solution $p$ and it is asymptotically stable. Moreover, if $L \lt M_d(p^{\mathrm{ext}},D)$ , then $p$ is the unique steady-state solution of (1.1).

  • If $p^{\mathrm{ext}} \geq \alpha _2$ , for any $L \gt 0$ , there exists exactly one SD steady-state solution $p$ and it is asymptotically stable. Moreover, if $L \lt M_i(p^{\mathrm{ext}},D)$ , then $p$ is the unique steady-state solution of (1.1).

Remark 2.3. This corollary gives us a comprehensive view of the long-time behaviour of solutions of (1.1) when the size $L$ of the domain is small. In this case, the unique steady-state solution $p$ is symmetric, monotone on each half of $\Omega$ and asymptotically stable. Its values will be close to $0$ if $p^{\mathrm{ext}}$ is small and close to $1$ if $p^{\mathrm{ext}}$ is large. We discuss an essential application of this result in Section 4.

3. Proof of the theorems

3.1 Proof of existence

In this section, we use phase plane analysis to prove the existence of both SM and non-SM steady-state solutions depending on the parameters. The studies of SD and SI solutions will be presented, respectively, in Sections 3.1.1 and 3.1.2. Then, using these results, we prove Theorem 2.2. The proof of Theorem 2.3 will be presented after that using the same technique.

First, we introduce the following function:

(3.1) \begin{equation} E(p,p') = \frac{(p')^2}{2} + F(p). \end{equation}

Since $\frac{d}{dx} E(p,p') = p'(p'' + f(p)) = 0$ , the quantity $E(p,p')$ is constant along each orbit of (1.2). From Proposition 2.1, there exists $x_0 \in (\!-\!L,L)$ such that $p'(x_0) = 0$ , thus one has

(3.2) \begin{equation} E(p(x_0),0) = E(p(x),p'(x)), \end{equation}

for all $x \in (\!-\!L,L)$ . Therefore, the relation between $p'$ and $p$ is as follows:

(3.3) \begin{equation} p' = \pm \sqrt{2F(p(x_0)) - 2F(p)}. \end{equation}

According to this relation, one has a phase plane as in Figure 3(a), in which the curves illustrate the relation between $p'(x)$ and $p(x)$ in (3.3) for different values of $p(x_0)$ . We can see that some curves do not end on the axis $p=0$ but wrap around the point $(\theta,0)$ . This is due to the fact that for any $p_1 \in [\theta,\beta ]$ , there exists a value $p_2 \in [0,\theta ]$ such that $F(p_1) = F(p_2)$ . Thus, if the curve passes through the point $(p_1,0)$ , it also passes through the point $(p_2,0)$ on the axis $p'=0$ . Moreover, those curves only exist if their intersection with the axis $p'=0$ has $p$ -coordinate less than or equal to $\beta$ . Besides, the two straight lines show the relation between $p'$ and $p$ at the boundary points. Solutions of (1.2) correspond to those orbits that connect the intersection of the curves with the line $p' = D(p-p^{\mathrm{ext}})$ to the intersection of the curves with the line $p' = -D(p-p^{\mathrm{ext}})$ .
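The conservation property underlying this phase portrait is easy to verify numerically. The sketch below (illustrative only, again using the model cubic $f(q) = q(q-\theta)(1-q)$ with $\theta = 0.3$) integrates $p'' = -f(p)$ with a classical RK4 scheme, starting from a point $(p(x_0), 0)$ on the axis $p' = 0$ inside the region of closed orbits, and checks that $E(p,p')$, equivalently relation (3.3), holds along the computed orbit up to the scheme's error:

```python
# Check that E(p, p') = p'^2/2 + F(p) is conserved along trajectories of
# p'' = -f(p), so that each phase-plane curve of Figure 3 is a level set of E.
# f is the model cubic f(q) = q(q - theta)(1 - q), theta = 0.3 (assumed values).

theta = 0.3

def f(q):
    return q * (q - theta) * (1.0 - q)

def F(q):
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

def E(p, dp):
    return 0.5 * dp * dp + F(p)

def rk4_orbit(p0, dp0, h=1e-3, steps=4000):
    # Classical RK4 for the first-order system (p, p')' = (p', -f(p)).
    p, dp = p0, dp0
    for _ in range(steps):
        k1p, k1v = dp, -f(p)
        k2p, k2v = dp + 0.5 * h * k1v, -f(p + 0.5 * h * k1p)
        k3p, k3v = dp + 0.5 * h * k2v, -f(p + 0.5 * h * k2p)
        k4p, k4v = dp + h * k3v, -f(p + h * k3p)
        p += h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        dp += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return p, dp

p0, dp0 = 0.4, 0.0            # start on the axis p' = 0, as at x = x_0
pL, dpL = rk4_orbit(p0, dp0)  # p0 in (theta, beta): a closed orbit around (theta, 0)
drift = abs(E(pL, dpL) - E(p0, dp0))
print(drift)  # tiny: RK4 preserves E up to its discretisation error
```

Since $E(p(x_0), 0) = F(p(x_0))$, the drift above is exactly the residual of relation (3.3) at the final point.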

Figure 3. Phase portraits of (1.2): straight lines illustrate the boundary conditions, and solid curves show relations between $p'$ and $p$ . Figure (a): curves $T_1, \ T_2$ and $T_3$ correspond to orbits of SD, SI and non-SM solutions, respectively. Figure (b): curve $T_4$ corresponds to an orbit of a non-SM solution.

In the phase plane in Figure 3(a), orbit $T_1$ describes an SD solution, while orbit $T_2$ corresponds to an SI solution. On the other hand, the solid curve $T_3$ shows the orbit of a steady-state solution that is not SM.

Remark 3.1. (Graphical interpretation of $D_*$ ) The SI solutions (see Figure 2(b)) have orbits like $T_2$ in Figure 3(a). This type of orbit only exists when the lines $p' = \pm D(p - p^{\mathrm{ext}})$ intersect the curves wrapping around the point $(\theta,0)$ . In the case when $p^{\mathrm{ext}} \gt \beta$ , the constant $D_* \gt 0$ in Theorem 2.2 is the slope of the tangent line to the curve passing through $(\beta,0)$ as in Figure 3(b). Hence, if $D \gt D_*$ , there exists no SI solution. We construct explicitly the value of $D_*$ in Proposition 3.2 below.

Next, we establish some relations between the solution $p$ and the parameters based on the phase portrait above. For any $x \gt x_0$ , if $p$ is monotone on $(x_0,x)$ , we can invert $x \mapsto p(x)$ into function $p \mapsto X(p)$ . We obtain $X'(p) = \frac{\pm 1}{\sqrt{2F(p(x_0)) - 2F(p)}}$ . By integrating this equation, we obtain that

(3.4) \begin{equation} x - x_0 = \displaystyle \int _{p(x_0)}^{p(x)} \frac{ (\!-\!1)^k ds}{\sqrt{2F(p(x_0)) - 2F(s)}}, \end{equation}

where $k = 1$ if $p$ is decreasing and $k = 2$ if $p$ is increasing on $(x_0,x)$ . We can obtain the analogous formula for $x \lt x_0$ .
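The time map (3.4) can be evaluated numerically despite the inverse-square-root singularity of the integrand at $s = p(x_0)$: the substitution $s = p(x_0) - u^2$ makes the integrand bounded. The following sketch (illustrative; model cubic $f(q) = q(q-\theta)(1-q)$, $\theta = 0.3$, and the assumed orbit through $(p(x_0), 0) = (0.4, 0)$ with $p$ decreasing) computes $x - x_0$ for $p(x) = 0.25$ this way and cross-checks it against a direct RK4 integration of $p'' = -f(p)$:

```python
# Cross-check of the time map (3.4): after s = p0 - u^2 the integrand
# 2u / sqrt(2 F(p0) - 2 F(p0 - u^2)) is bounded, and the midpoint rule
# (which never evaluates the endpoints) applies.  Values theta = 0.3,
# p0 = 0.4, p1 = 0.25 are assumed for illustration.

theta = 0.3

def f(q):
    return q * (q - theta) * (1.0 - q)

def F(q):
    return -q**4 / 4 + (1 + theta) * q**3 / 3 - theta * q**2 / 2

p0, p1 = 0.4, 0.25   # p decreases from p(x_0) = p0; measure x - x_0 when p = p1

def time_map(n=20000):
    # Composite midpoint rule on [0, sqrt(p0 - p1)] in the variable u.
    b = (p0 - p1) ** 0.5
    h = b / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h
        total += 2 * u / (2 * F(p0) - 2 * F(p0 - u * u)) ** 0.5
    return total * h

def rk4_crossing(h=1e-3, max_steps=20000):
    # Integrate (p, p')' = (p', -f(p)) from (p0, 0) until p first reaches p1;
    # locate the crossing by linear interpolation within the last step.
    x, p, dp = 0.0, p0, 0.0
    for _ in range(max_steps):
        k1p, k1v = dp, -f(p)
        k2p, k2v = dp + 0.5 * h * k1v, -f(p + 0.5 * h * k1p)
        k3p, k3v = dp + 0.5 * h * k2v, -f(p + 0.5 * h * k2p)
        k4p, k4v = dp + h * k3v, -f(p + h * k3p)
        p_new = p + h * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
        dp = dp + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if p_new <= p1:
            return x + h * (p - p1) / (p - p_new)
        x, p = x + h, p_new
    raise RuntimeError("no crossing found")

x_map = time_map()
x_ode = rk4_crossing()
print(x_map, x_ode)  # the two values agree to the schemes' accuracy
```

The same substitution handles the increasing case $k = 2$ after flipping signs.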

First, we focus on SM solutions, for which $p'(0) = 0$ , and analyse the integral in (3.4) with $x = L, x_0 = 0$ . For any $p^{\mathrm{ext}} \in (0,1)$ , using (3.3), we have

(3.5) \begin{equation} F(p(0)) = F(p(L)) + \frac{1}{2} D^2 \left ( p(L) - p^{\mathrm{ext}}\right )^2 = G(p(L)), \end{equation}

for $F$ defined in (2.1) and

(3.6) \begin{equation} G(q) \;:\!=\; F(q) + \frac{1}{2} D^2 (q - p^{\mathrm{ext}} )^2, \end{equation}

and from (3.4) with $x=L, x_0 = 0$ , we have

(3.7) \begin{equation} L = \displaystyle \int _{p(0)}^{p(L)} \frac{ (\!-\!1)^k ds}{\sqrt{2F(p(0)) - 2F(s)}}, \end{equation}

where $k = 1$ if $p$ is decreasing on $(0,L)$ , $k = 2$ if $p$ is increasing on $(0,L)$ .

Thus, an SM solution of (1.2) exists if there exist values $p(L)$ and $p(0)$ that satisfy (3.5) and (3.7). When such values exist, the value of $p(x)$ for any $x \in (\!-\!L,L)$ can be recovered from (3.4).

Before proving the existence of such values of $p(0)$ and $p(L)$ , we establish some useful properties of the function $G$ defined in (3.6). It is continuous on $[0,1]$ and $G(q)\geq F(q)$ for all $q \in [0,1]$ . Moreover, the following lemma shows that $G$ has a unique minimum point.

Lemma 3.1. For any $p^{\mathrm{ext}} \in (0,1)$ , there exists a unique value $\overline{q} \in (0,1)$ such that $G'(\overline{q}) = 0$ , $G'(q) \lt 0$ for all $q \in [0,\overline{q})$ and $G'(q) \gt 0$ for all $q \in (\overline{q},1]$ . Particularly, $G(\overline{q}) = \displaystyle \min _{[0,1]} G$ .

Proof. We have $ G'(q) = f(q) + D^2(q-p^{\mathrm{ext}})$ . We consider the following cases.

Case 1: When $p^{\mathrm{ext}} = \theta$ , we have $G'(p^{\mathrm{ext}}) = G'(\theta ) = f(\theta ) = 0, G'(q) \lt 0$ for all $q \in (0,\theta )$ and $G'(q) \gt 0$ for all $q \in (\theta,1)$ . Thus $\overline{q} = \theta = p^{\mathrm{ext}}$ .

Case 2: When $p^{\mathrm{ext}} \lt \theta$ , we have $G'(q) \lt 0$ for all $q \in [0,p^{\mathrm{ext}}]$ and $G'(q) \gt 0$ for all $q \in [\theta,1]$ . So there exists at least one value $\overline{q} \in (p^{\mathrm{ext}},\theta )$ such that $G'(\overline{q}) = 0$ .

For any $\overline{q} \in (p^{\mathrm{ext}},\theta )$ such that $G'(\overline{q}) = 0$ , we have $f(\overline{q}) + D^2 (\overline{q} - p^{\mathrm{ext}}) = 0$ so that $D^2 = -\frac{f(\overline{q})}{\overline{q}- p^{\mathrm{ext}}}$ . We can prove that $G''(\overline{q})$ is strictly positive. Indeed, from Assumption 2.2, we have that $\alpha _1$ is the unique value in $(0,\theta )$ such that $f'(\alpha _1) = 0$ , thus $f(\alpha _1) = \displaystyle \min _{[0,\theta ]} f \lt 0$ .

If $\alpha _1 \leq \overline{q} \lt \theta$ then $f'(\overline{q}) \geq 0$ . One has $ G''(\overline{q}) = f'(\overline{q}) + D^2 \gt 0$ .

If $p^{\mathrm{ext}} \lt \overline{q} \lt \alpha _1$ , due to the fact that $f$ is convex in $(0,\alpha _1)$ one has $f'(\overline{q}) \geq \frac{f(\overline{q}) - f(p^{\mathrm{ext}})}{\overline{q} - p^{\mathrm{ext}}}$ . Since $f(p^{\mathrm{ext}}) \lt 0$ , one has $G''(\overline{q}) = f'(\overline{q}) + D^2 = f'(\overline{q}) - \frac{f(\overline{q})}{\overline{q} - p^{\mathrm{ext}}} \gt f'(\overline{q}) + \frac{f(p^{\mathrm{ext}}) -f(\overline{q})}{\overline{q} - p^{\mathrm{ext}}} \geq 0.$ One can deduce that $\overline{q}$ is the unique value in $(0,1)$ such that $G'(\overline{q}) = 0$ and $G(\overline{q}) = \displaystyle \min _{[0,1]} G$ , so it satisfies Lemma 3.1.

Case 3: When $p^{\mathrm{ext}} \gt \theta$ , the proof is analogous to case 2 but using the concavity of $f$ in $(\alpha _2,1)$ . We obtain that there exists a unique value $\overline{q}$ in $(\theta,p^{\mathrm{ext}})$ that satisfies Lemma 3.1.
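Lemma 3.1 can be checked numerically. The sketch below (with the same illustrative cubic nonlinearity as before, and assumed values of $D$ and $p^{\mathrm{ext}}$ ) locates the unique zero $\overline{q}$ of $G'(q) = f(q) + D^2(q - p^{\mathrm{ext}})$ by bisection:

```python
THETA = 0.3          # illustrative bistable threshold (assumption)
D, P_EXT = 1.0, 0.2  # migration rate and outside proportion (assumptions)

def f(p):
    return p * (1.0 - p) * (p - THETA)

def G_prime(q):
    # derivative of G(q) = F(q) + (D^2/2)(q - p_ext)^2
    return f(q) + D**2 * (q - P_EXT)

def find_qbar(lo=0.0, hi=1.0, tol=1e-12):
    """Bisection for the unique zero of G' in (0, 1); G'(0) < 0 < G'(1)."""
    assert G_prime(lo) < 0 < G_prime(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if G_prime(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    qbar = find_qbar()
    print(qbar)  # here p_ext < theta, so qbar lies in (p_ext, theta) (Case 2)
```

Since $p^{\mathrm{ext}} = 0.2 \lt \theta = 0.3$ here, the computed $\overline{q}$ falls in $(p^{\mathrm{ext}},\theta )$ , as in Case 2 of the proof.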

When $p^{\mathrm{ext}} = \theta$ , it is easy to check that $p\equiv \theta$ is a solution of (1.2). We now analyse two types of SM solutions (see Figure 2) in the following parts.

3.1.1 Existence of SD solutions

In this part, the solution $p$ under study is symmetric on $(\!-\!L,L)$ and decreasing on $(0,L)$ (see Figure 2(a)), so $ p(L) \lt p(x) \lt p(0)$ for any $x \in (0,L)$ . Moreover, from (3.3) we have $F(p(x)) \leq F(p(0))$ , so $F'(p(0)) \geq 0$ , which implies that $p(0) \in [\theta,1]$ . We study the existence of SD solutions in two steps:

Step 1: Rewriting as a non-linear equation on $p(L)$

For any $q \in (\theta,1)$ , we have $F'(q) = f(q) \gt 0$ so $F|_{(\theta,1)} \;:\; (\theta, 1) \longrightarrow \left (F(\theta ), F(1)\right )$ is invertible. Define $F_1^{-1} \;:\!=\; (F|_{(\theta,1)})^{-1} \;:\; \left ( F(\theta ), F(1)\right ) \longrightarrow (\theta,1)$ , and $F_1^{-1}(F(\theta )) = \theta, F^{-1}_1(F(1)) = 1$ . Then, $F^{-1}_1$ is continuous in $[F(\theta ),F(1)]$ . For any $y \in \left ( F(\theta ),F(1)\right )$ , one has $\left ( F^{-1}_1\right )'(y) = \frac{1}{F'\left ( F^{-1}_1(y)\right )} = \frac{1}{f\left ( F^{-1}_1(y)\right )} \gt 0$ , so $F^{-1}_1$ is an increasing function in $\left ( F(\theta ), F(1)\right )$ . From (3.5) and (3.7), since $p$ is decreasing in $(0,L)$ , we have $L = \displaystyle \int _{p(L)}^{p(0)} \frac{ ds}{\sqrt{2G(p(L)) - 2F(s)}}$ . Denote

(3.8) \begin{equation} \mathcal{F}_1(q) \;:\!=\; \displaystyle \int _{q}^{F_1^{-1}(G(q))}\frac{ds}{\sqrt{2G(q) - 2F(s)}}. \end{equation}

Hence, an SD solution $p$ of system (1.2) has $p(0) = F^{-1}_1(G(p(L)))$ , and $p(L)$ satisfies

(3.9) \begin{equation} L = \mathcal{F}_1(p(L)). \end{equation}

Moreover, one has $p'(x) \leq 0$ for all $x \in (0,L)$ thus $-D(p(L) - p^{\mathrm{ext}}) = p'(L) \leq 0$ . One can deduce that

(3.10) \begin{equation} p(L) \geq p^{\mathrm{ext}}. \end{equation}

Step 2: Solving (3.9) in $[p^{\mathrm{ext}},1]$

The following proposition states the existence of a solution of (3.9).

Proposition 3.1. For any $D \gt 0, p^{\mathrm{ext}} \in (0,1)$ , we have

  1. 1. If $0 \lt p^{\mathrm{ext}} \lt \theta$ , then there exists a constant $M_1 \gt 0$ such that equation (3.9) has at least one solution $p(L) \geq p^{\mathrm{ext}}$ if and only if $L \geq M_1$ .

  2. 2. If $ \theta \leq p^{\mathrm{ext}} \lt 1$ , then equation (3.9) admits at least one solution $p(L) \geq p^{\mathrm{ext}}$ for all $L \gt 0$ . If $p^{\mathrm{ext}} \geq \alpha _2$ , then this solution is unique.

Proof. Since $F_1^{-1}$ is only defined in $[F(\theta ),F(1)]$ , we need to find $p(L) \in [p^{\mathrm{ext}},1]$ such that $G(p(L)) \in [F(\theta ),F(1)]$ .

For all $q \in (0,1)$ , we have $G(q) \geq F(q) \geq F(\theta )$ and from Lemma 3.1, there exists a value $\overline{q} \in (0,1)$ such that $\displaystyle \min _{[0,1]} G = G(\overline{q}) \leq G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \lt \max _{[0,1]} F = F(1)$ . Moreover, one has $G(1) \gt F(1)$ ; thus, there exists a value $p^* \in (p^{\mathrm{ext}},1)$ such that $G(p^*) = F(1)$ . Then, for all $q \in [p^{\mathrm{ext}},p^*], G(q) \in [F(\theta ),F(1)]$ and we will find $p(L)$ in $[p^{\mathrm{ext}},p^*]$ . Since $F^{-1}_1$ increases in $(F(\theta ),F(1))$ , then $p(0) = F^{-1}_1(G(p(L))) \geq F^{-1}_1(F(p(L))) \geq p(L).$

The function $\mathcal{F}_1$ in (3.8) is well defined, continuous and non-negative in $[p^{\mathrm{ext}},p^*)$ . Moreover, since $F'(1) = 0$ , one has $\displaystyle \lim _{p \rightarrow p^*}\mathcal{F}_1(p) = \displaystyle \int _{p^*}^{1} \frac{ds}{\sqrt{2F(1) - 2F(s)}} = +\infty$ .

Case 1: If $0 \lt p^{\mathrm{ext}} \lt \theta$ , we will prove that $\mathcal{F}_1$ is strictly positive in $[p^{\mathrm{ext}},p^*)$ . Indeed, for any $y \in [0,1]$ , if $y \lt \theta$ , by the definition of $F^{-1}_1$ , we have $F^{-1}_1(G(y)) \in [\theta,1]$ so $F^{-1}_1(G(y)) \gt y$ . If $y \geq \theta \gt p^{\mathrm{ext}}$ , then $G(y) = F(y) + \frac{1}{2} D^2(y - p^{\mathrm{ext}})^2 \gt F(y)$ so again $F^{-1}_1(G(y)) \gt y$ . Hence, $\mathcal{F}_1(y) \gt 0$ for all $y \in [p^{\mathrm{ext}},p^*)$ . We have $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$ , so there exists $p \in [p^{\mathrm{ext}},p^*)$ such that $M_1\;:\!=\; \mathcal{F}_1(p) = \displaystyle \min _{[p^{\mathrm{ext}},p^*]} \mathcal{F}_1 \gt 0$ , and system (3.9) admits at least one solution if and only if $L \geq M_1$ .

Case 2: If $\theta \leq p^{\mathrm{ext}} \lt 1$ , one has $G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}})$ , then $F^{-1}_1(G(p^{\mathrm{ext}})) = p^{\mathrm{ext}}$ so $\mathcal{F}_1 (p^{\mathrm{ext}}) = 0$ . On the other hand, $\mathcal{F}_1(p) \rightarrow +\infty$ when $p \rightarrow p^*$ . Thus, for any $L \gt 0$ , there always exists at least one value $p(L) \in (p^{\mathrm{ext}},p^*)$ such that $\mathcal{F}_1(p(L)) = L$ .

Proof of uniqueness: When $p^{\mathrm{ext}} \geq \alpha _2$ , we can prove that $\mathcal{F}_1' \gt 0$ on $(p^{\mathrm{ext}},p^*)$ . Indeed, denoting $\gamma (q) = F_1^{-1}(G(q))$ and changing the variable from $s$ to $t$ such that $s = t\gamma (q) + (1-t)q$ , one has

\begin{equation*} \mathcal {F}_1(q) = \displaystyle \int _{0}^{1} \frac {[\gamma (q) - q]dt}{\sqrt {2F(\gamma (q)) - 2F(t\gamma (q) + (1-t)q)}}. \end{equation*}

To simplify, denote $s(q)= t\gamma (q) + (1-t)q$ . For any $t \in (0,1)$ , one has $q \lt s(q) \lt \gamma (q)$ . Let us define $\Delta F = F(\gamma (q)) - F(s(q))$ , then one has

$ \sqrt{2}\mathcal{F}_1'(q) = \displaystyle \int _0^1 (\gamma '(q)-1) (\Delta F)^{-1/2} dt - \frac{1}{2} \int _0^1 (\Delta F)^{-3/2}(\gamma (q)-q)\frac{d\Delta F}{dq} dt$

$ = \displaystyle \int _0^1 (\Delta F)^{-3/2} \left [ (\gamma '(q)-1) \Delta F - \frac{1}{2} (\gamma (q)-q) (f(\gamma (q)) \gamma '(q) - f(s(q))s'(q)) \right ] dt$ .

Let $P$ be the formula in the brackets, then

$ \begin{array}{r l} P & = (\gamma '-1) \Delta F - \frac{1}{2} (\gamma -q) \left [f(\gamma ) \gamma ' - f(s)(t\gamma ' + 1 - t)\right ]\\[5pt] & = (\gamma '-1) \left [\Delta F -\frac{1}{2}(\gamma -q)f(\gamma ) + \frac{1}{2}(s-q)f(s)\right ] - \frac{1}{2}(\gamma - q)(f(\gamma ) - f(s)), \end{array}$

Define $\psi (y) \;:\!=\; F(y) - \frac{1}{2}f(y)(y - q)$ for any $y \in [q,\gamma (q)]$ , then one has $\psi '(y) = \frac{1}{2}[f(y) - f'(y)(y-q)] \geq \frac{f(q)}{2}\gt 0$ since $y \geq q \gt p^{\mathrm{ext}} \geq \alpha _2$ and $f$ is concave in $(\alpha _2,1)$ , $f(q) \gt 0$ . Moreover, $f$ is decreasing on $(\alpha _2,1)$ so $0 \lt f(\gamma (q)) \lt f(s(q)) \lt f(q)$ , and $\gamma '(q) = \frac{G'(q)}{f(F_1^{-1}(G(q)))} = \frac{f(q) + D^2 (q - p^{\mathrm{ext}})}{f(\gamma (q))} \gt 1$ . Hence, we can deduce that $P = (\gamma '-1)(\psi (\gamma ) - \psi (s)) - \frac{1}{2}(\gamma - q)(f(\gamma ) - f(s)) \gt 0$ for any $t \in (0,1)$ . This proves that function $\mathcal{F}_1$ is increasing on $(p^{\mathrm{ext}},p^*)$ , so the solution of equation (3.9) is unique.
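The two steps above translate directly into a numerical procedure. The sketch below (illustrative cubic nonlinearity and assumed parameters, with $p^{\mathrm{ext}} \geq \alpha _2$ so that Proposition 3.1 guarantees uniqueness) approximates $\mathcal{F}_1$ by a midpoint rule and solves $L = \mathcal{F}_1(p(L))$ by bisection:

```python
import math

# Illustrative sketch (not the paper's code): solve L = F_1(p(L)) from (3.9)
# for the stand-in bistable nonlinearity f(p) = p(1-p)(p-theta).
THETA = 0.3            # threshold (assumption)
D, P_EXT = 1.0, 0.75   # migration rate and outside proportion (assumptions)

def f(p): return p * (1.0 - p) * (p - THETA)
def F(q): return -q**4 / 4 + (1 + THETA) * q**3 / 3 - THETA * q**2 / 2
def G(q): return F(q) + 0.5 * D**2 * (q - P_EXT) ** 2

def inv_F1(y, tol=1e-14):
    # inverse of F restricted to (theta, 1), where F is increasing
    lo, hi = THETA, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

def F1(q, n=20_000):
    """Time map (3.8) by the midpoint rule; the integrand has an
    inverse-square-root singularity at the upper endpoint gamma(q)."""
    gamma = inv_F1(G(q))
    h = (gamma - q) / n
    return sum(h / math.sqrt(2.0 * (G(q) - F(q + (i + 0.5) * h)))
               for i in range(n))

def solve_pL(L, tol=1e-8):
    """Bisection for p(L) in (p_ext, p*) such that F_1(p(L)) = L."""
    lo, hi = P_EXT + 1e-4, 1.0
    while G(hi) > F(1.0):      # shrink hi below p*, where G(p*) = F(1)
        hi -= 1e-3
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F1(mid) < L else (lo, mid)
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    pL = solve_pL(L=1.0)
    print(pL)  # since p_ext >= alpha_2 here, this solution is unique
```

The bisection on $\mathcal{F}_1(q) - L$ is justified because $\mathcal{F}_1(p^{\mathrm{ext}}) = 0$ and $\mathcal{F}_1 \rightarrow +\infty$ at $p^*$ , exactly the sign change used in Case 2 of the proof.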

3.1.2 Existence of SI solutions

In this case, the technique used to prove the existence of SI solutions is analogous to that for SD solutions, except when $p^{\mathrm{ext}} \gt \beta$ (case 3 below). Since the proof is not straightforward, it is worthwhile to re-establish this technique for SI solutions in the following two steps:

Step 1: Rewriting as a non-linear equation on $p(L)$

Since now $p$ is symmetric on $(\!-\!L,L)$ and increasing in $(0,L)$ (see Figure 2(b)), we have $ p(0) \lt p(x) \lt p(L)$ for any $x \in (0,L)$ . Moreover, from (3.3) we have $F(p(x)) \leq F(p(0))$ , so $F'(p(0)) \leq 0$ . This implies that $p(0) \in [0,\theta ]$ .

For any $q \in (0,\theta )$ , we have $F'(q) = f(q) \lt 0$ so $F|_{(0,\theta )} \;:\; (0,\theta ) \longrightarrow \left (F(\theta ), F(0)\right )$ is invertible. Define $F_2^{-1} \;:\!=\; (F|_{(0,\theta )})^{-1} \;:\; \left ( F(\theta ), F(0)\right ) \longrightarrow (0,\theta )$ , $F_2^{-1}(F(\theta )) = \theta, F^{-1}_2(F(0)) = 0$ , and $F^{-1}_2$ is continuous in $[F(\theta ),F(0)]$ . For any $y \in \left ( F(\theta ), F(0)\right )$ , $\left ( F^{-1}_2\right )'(y) = \frac{1}{F'\left ( F^{-1}_2(y)\right )} = \frac{1}{f\left ( F^{-1}_2(y)\right )} \lt 0$ , so $F^{-1}_2$ is a decreasing function in $\left ( F(\theta ), F(0)\right )$ . From (3.5) and (3.7), we have $L = \displaystyle \int _{p(0)}^{p(L)} \frac{ds}{\sqrt{2G(p(L)) - 2F(s)}}$ . Denote

(3.11) \begin{equation} \mathcal{F}_2(q) \;:\!=\; \displaystyle \int _{F^{-1}_2(G(q))}^{q} \frac{ds}{\sqrt{2G(q) - 2F(s)}}. \end{equation}

Hence, an SI solution of system (1.2) has $p(0) = F^{-1}_2(G(p(L)))$ , and $p(L)$ satisfies

(3.12) \begin{equation} L = \mathcal{F}_2(p(L)), \end{equation}

and in this case, one needs to find $p(L)$ in $[0,p^{\mathrm{ext}}]$ .

Step 2: Solving (3.12) in $[0,p^{\mathrm{ext}}]$

The following proposition states the existence of a solution of (3.12).

Proposition 3.2. For any $p^{\mathrm{ext}} \in (0,1)$ , considering the value $\beta$ as in (2.2), we have

  1. 1. If $ 0 \lt p^{\mathrm{ext}} \leq \theta$ , then equation (3.12) admits at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ for all $L \gt 0, D \gt 0$ . If $p^{\mathrm{ext}} \leq \alpha _1$ , this solution is unique.

  2. 2. If $\theta \lt p^{\mathrm{ext}} \leq \beta$ , then for all $D \gt 0$ , there exists a constant $M_2 \gt 0$ such that equation (3.12) has at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ if and only if $L \geq M_2$ .

  3. 3. If $\beta \lt p^{\mathrm{ext}} \lt 1$ , then there exists a constant $D_* \gt 0$ such that when $D \geq D_*$ , equation (3.12) has no solution. Otherwise, there exists a constant $M_3 \gt 0$ such that equation (3.12) has at least one solution $p$ with $p(L) \leq p^{\mathrm{ext}}$ if and only if $L \geq M_3$ .

Proof. As we assume that $F(0) \lt F(1)$ and $F(\theta ) \lt F(0)$ then, due to the continuity of $F$ , one can deduce that there exists a value $\beta \in (\theta,1)$ such that $F(\beta ) = F(0) = 0$ .

Since $F_2^{-1}$ is only defined in $[F(\theta ),F(0)]$ , we need to find $p(L) \in [0,p^{\mathrm{ext}}]$ such that $G(p(L)) \in [F(\theta ),F(0)]$ . For all $q \in (0,1)$ , we have $G(q) \geq F(q) \geq F(\theta )$ , thus equation (3.12) has solutions if and only if $\displaystyle \min _{[0,1]} G \lt F(0)$ . Even when $\displaystyle \min _{[0,1]} G = G(\overline{q}) = F(0)$ , $\mathcal{F}_2$ is still not defined in $[0,1]$ since $\mathcal{F}_2 (\overline{q}) = +\infty$ .

One has the following cases:

Case 1: $0 \lt p^{\mathrm{ext}} \leq \theta$ :

We have $\displaystyle \min _{[0,1]} G = G(\overline{q}) \leq G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \lt \max _{[0,\theta ]}F = F(0)$ , and $G(0) \gt F(0)$ , so there is a value $p_* \in (0,p^{\mathrm{ext}})$ such that $G(p_*) = F(0)$ . Moreover, $F'(0) = 0$ , so $\displaystyle \lim _{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$ . Thus, the function $\mathcal{F}_2$ is only well defined and continuous in $(p_*,p^{\mathrm{ext}}]$ .

When $0 \lt p^{\mathrm{ext}} \leq \theta$ , $F^{-1}_2(G(p^{\mathrm{ext}})) = F^{-1}_2(F(p^{\mathrm{ext}})) = p^{\mathrm{ext}}$ so $\mathcal{F}_2 (p^{\mathrm{ext}}) = 0$ . We can deduce that for any $L \gt 0$ , there always exists at least one value $p(L) \in (p_*,p^{\mathrm{ext}})$ such that $\mathcal{F}_2(p(L)) = L$ . When $p^{\mathrm{ext}} \leq \alpha _1$ , arguing analogously to the second case of Proposition 3.1, one has $\mathcal{F}_2' \lt 0$ on $(p_*,p^{\mathrm{ext}})$ , thus the solution is unique.

Case 2: $\theta \lt p^{\mathrm{ext}} \leq \beta$ :

Since $F$ increases on $(\theta,1)$ , then $\displaystyle \min _{[0,1]} G = G(\overline{q}) \lt G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \leq F(\beta ) = F(0)$ . Analogously to the previous case, $\mathcal{F}_2$ is well defined and continuous in $(p_*,p^{\mathrm{ext}}]$ , $\displaystyle \lim _{p \rightarrow p_*}\mathcal{F}_2(p) = +\infty$ , and $\mathcal{F}_2$ is strictly positive in $(p_*,p^{\mathrm{ext}}]$ . Therefore, there exists $p \in (p_*,p^{\mathrm{ext}}]$ such that

(3.13) \begin{equation} M_2\;:\!=\; \mathcal{F}_2(p) = \min _{[p_*,p^{\mathrm{ext}}]} \mathcal{F}_2 \gt 0, \end{equation}

and system (3.12) admits at least one solution if and only if $L \geq M_2$ .

Case 3: $\beta \lt p^{\mathrm{ext}} \lt 1$ :

Consider the function $H(q) = F(q) + \frac{1}{2} f(q)(p^{\mathrm{ext}} - q)$ defined in an interval $[\theta,p^{\mathrm{ext}}]$ . For any $\theta \lt q \lt p^{\mathrm{ext}}$ , one can prove that $H'(q) \geq 0$ .

Indeed, if $q \leq \alpha _2$ , then $f'(q) \geq 0$ , and $f(q) \gt 0$ . One has $H'(q) = \frac{1}{2} f(q) + \frac{1}{2}f'(q)(p^{\mathrm{ext}} - q) \gt 0$ . If $q \gt \alpha _2$ , from Assumption 2.2, the function $f$ is concave in $(\alpha _2,1)$ , and hence $f'(q)(p^{\mathrm{ext}} - q) \geq f(p^{\mathrm{ext}}) - f(q)$ . Thus,

\begin{equation*} H'(q) = \frac {1}{2} (p^{\mathrm {ext}} - q) \left (f'(q) + \frac {f(q)}{p^{\mathrm {ext}} - q}\right ) \gt \frac {1}{2} (p^{\mathrm {ext}} - q) \left (f'(q) + \frac {f(q) - f(p^{\mathrm {ext}})}{p^{\mathrm {ext}} - q}\right ) \geq 0.\end{equation*}

Therefore, function $H$ increases in $(\theta,p^{\mathrm{ext}})$ . Moreover, $H(\theta ) = F(\theta ) \lt F(0)$ and $H(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \gt F(\beta ) = F(0)$ , and so there exists a unique value $\overline{p}_* \in (\theta,p^{\mathrm{ext}})$ such that $H(\overline{p}_*) = F(0)$ . Take $D_* \gt 0$ such that $D_*^2 = \frac{f(\overline{p}_*)}{p^{\mathrm{ext}} - \overline{p}_*}$ . Then, for any $D \gt 0$ , from Lemma 3.1, there is a unique value $\overline{q} \in (\theta,p^{\mathrm{ext}})$ such that $G'(\overline{q}) = 0$ , $G(\overline{q}) = \displaystyle \min _{[0,1]}G$ , and $D^2 = \frac{f(\overline{q})}{p^{\mathrm{ext}} - \overline{q}}$ . If $D \lt D_*$ , then $\frac{f(\overline{q})}{p^{\mathrm{ext}} - \overline{q}} \lt \frac{f(\overline{p}_*)}{p^{\mathrm{ext}} - \overline{p}_*}$ .

Let $h(q) = \frac{f(q)}{p^{\mathrm{ext}} - q}$ , then $h'(q) = \frac{1}{p^{\mathrm{ext}} - q} \left ( f'(q) + \frac{f(q)}{p^{\mathrm{ext}} - q} \right ) \gt 0$ for $q \in (\theta, p^{\mathrm{ext}})$ . So function $h$ is increasing in $(\theta, p^{\mathrm{ext}})$ , and we can deduce that $\overline{q} \lt \overline{p}_*$ . Hence, $\displaystyle \min _{[0,1]}G = G(\overline{q}) = F(\overline{q}) + \frac{1}{2}D^2(p^{\mathrm{ext}} - \overline{q})^2 = F(\overline{q}) + \frac{1}{2}f(\overline{q})(p^{\mathrm{ext}}- \overline{q}) = H(\overline{q}) \lt H(\overline{p}_*) = F(0)$ .

Moreover, $G(p^{\mathrm{ext}}) = F(p^{\mathrm{ext}}) \gt F(\beta ) = F(0)$ , $G(0) \gt F(0)$ . Thus, there exists a maximal interval $(q_*,q^*) \subset [0,p^{\mathrm{ext}}]$ such that $G(q) \in (F(\theta ),F(0))$ for all $q \in (q_*,q^*)$ . We have $0 \lt q_* \lt \overline{q} \lt q^* \lt p^{\mathrm{ext}}$ and $G(q_*) = G(q^*) = F(0)$ . Therefore, $\mathcal{F}_2$ is well defined and continuous in $(q_*,q^*)$ , and $ \displaystyle \lim _{p \rightarrow q^*}\mathcal{F}_2(p) = \lim _{p \rightarrow q_*} \mathcal{F}_2(p) = +\infty$ . Reasoning like in the previous case, (3.12) admits solution if and only if $L \geq M_3$ , where

(3.14) \begin{equation} M_3 \;:\!=\; \displaystyle \min _{[q_*,q^*]} \mathcal{F}_2 \gt 0. \end{equation}

On the other hand, if $D \geq D_*$ , $\displaystyle \min _{[0,1]} G \geq F(0)$ , and equation (3.12) has no solution.
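The threshold $D_*$ of case 3 can be computed by following the proof: a bisection locates $\overline{p}_*$ with $H(\overline{p}_*) = F(0)$ , and then $D_*^2 = f(\overline{p}_*)/(p^{\mathrm{ext}} - \overline{p}_*)$ . The sketch below uses the illustrative cubic nonlinearity (for which $F(0) = 0$ ) and an assumed $p^{\mathrm{ext}} \gt \beta$ :

```python
import math

THETA = 0.3        # illustrative bistable threshold (assumption)
P_EXT = 0.8        # outside proportion, chosen with p_ext > beta (assumption)

def f(p): return p * (1.0 - p) * (p - THETA)
def F(q): return -q**4 / 4 + (1 + THETA) * q**3 / 3 - THETA * q**2 / 2

def H(q):
    # H(q) = F(q) + (1/2) f(q) (p_ext - q), increasing on (theta, p_ext)
    return F(q) + 0.5 * f(q) * (P_EXT - q)

def critical_D(tol=1e-12):
    """Find p_bar_* in (theta, p_ext) with H(p_bar_*) = F(0) = 0 by bisection,
    then D_* = sqrt(f(p_bar_*) / (p_ext - p_bar_*)) (Case 3 of Prop. 3.2)."""
    lo, hi = THETA, P_EXT
    assert H(lo) < 0.0 < H(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    pbar = 0.5 * (lo + hi)
    return pbar, math.sqrt(f(pbar) / (P_EXT - pbar))

if __name__ == "__main__":
    pbar, D_star = critical_D()
    print(pbar, D_star)  # no SI solution exists once D >= D_star
```

The monotonicity of $H$ on $(\theta, p^{\mathrm{ext}})$ established above is what makes the bisection valid.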

Proof of Theorem 2.2. As we showed in Section 3.1.1, the SD steady-state solution $p$ of (1.2) has $p(L)$ satisfying equation (3.9). From Proposition 3.1, we can deduce that for fixed $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , $M_d(p^{\mathrm{ext}}, D) = \displaystyle \min _q \mathcal{F}_1(q)$ . Thus, we obtain the results for SD steady-state solutions of (1.2) in Theorem 2.2.

Similarly, Proposition 3.2 provides that for fixed $p^{\mathrm{ext}} \in (0,1), D \gt 0$ , we have $M_i(p^{\mathrm{ext}}, D) = \displaystyle \min _q \mathcal{F}_2(q)$ when $p^{\mathrm{ext}} \leq \beta$ or $D \lt D_*$ . Otherwise, $M_i(p^{\mathrm{ext}}, D) = +\infty$ .

3.1.3 Existence of non-SM solutions

As the phase portrait in Figure 3 shows, there exist some solutions of (1.2) which are neither SD nor SI. These solutions can be non-symmetric or can have more than one (local) extremum. By studying these cases, we prove Theorem 2.3 as follows.

Proof of Theorem 2.3. We can see from Figure 3(a) that for fixed $p^{\mathrm{ext}} \leq \beta, D \gt 0$ , the non-SM solutions $p$ of (1.2) have more than one (local) extremum because their orbits have at least two intersections with the axis $p' = 0$ (see e.g. $T_3$ ). These solutions share the same local minimum value, denoted $p_{\mathrm{min}}$ , and the same local maximum value, denoted $p_{\mathrm{max}}$ . Moreover, we have $p_{\mathrm{min}} \lt \theta \lt p_{\mathrm{max}}$ and $F(p_{\mathrm{min}}) = F(p_{\mathrm{max}})$ .

Since the orbits make a round trip of distance $2L$ , the more extrema a solution has, the larger $L$ must be. Hence, to find the minimal value $M_*$ , we study the case when $p$ has one local minimum and one local maximum, with orbit $T_3$ in Figure 3(a). Then, we have

(3.15) \begin{equation} G(p(\!-\!L)) = G(p(L)) = F(p_{\mathrm{min}}) = F(p_{\mathrm{max}}), \end{equation}

and by using (3.4), we obtain

\begin{equation*} 2L = \mathcal {F}_1(p(\!-\!L)) + \displaystyle \int _{p_{\mathrm {min}}}^{p_{\mathrm {max}}} \frac {ds}{\sqrt {2F(p_{\mathrm {min}})- 2F(s)}} + \mathcal {F}_2(p(L)) \end{equation*}
\begin{equation*} = 2\left [\mathcal {F}_1(p(\!-\!L)) + \mathcal {F}_2(p(L))\right ] + \displaystyle \int _{p(L)}^{p(\!-\!L)} \frac {ds}{\sqrt {2G(p(L))- 2F(s)}}. \end{equation*}

Using the same idea as above, we can show that $L$ depends continuously on $p(L)$ . Moreover, we know that $M_d = \min \mathcal{F}_1,$ $M_i = \min \mathcal{F}_2$ ; therefore, there exists a constant $M_*$ such that (1.2) admits at least one non-SM solution $p$ if and only if $L \geq M_* \gt M_d + M_i$ .

On the other hand, for fixed $p^{\mathrm{ext}} \gt \beta, D \lt D_*$ , it is possible that (1.2) admits a non-symmetric solution with only one minimum. The orbit of this solution is like $T_4$ in Figure 3(b). In this case, we have $G(p(L)) = G(p(\!-\!L)) = F(p_{\mathrm{min}})$ with $p(\!-\!L) \lt p(L)$ and

\begin{equation*} 2L = \mathcal {F}_2(p(\!-\!L)) + \mathcal {F}_2(p(L)) \gt 2M_i. \end{equation*}

Hence, we only need $M_* \gt M_i$ .

3.2 Stability analysis

We first study the principal eigenvalue and eigenfunction of the linear problem. Then, using these eigenelements, we construct super- and sub-solutions of (1.1) and prove the stability or instability corresponding to each case in Theorem 2.5.

Proof of Theorem 2.5. Consider the corresponding linear eigenvalue problem (2.8). We can see that $\phi = \cos{\left (\sqrt{\lambda }x\right )}$ is an eigenfunction if and only if $\sqrt{\lambda }\tan{\left (L\sqrt{\lambda }\right )} = D$ . Denote by $\lambda _1$ the smallest positive value of $\lambda$ which satisfies this equality; then $L\sqrt{\lambda _1} \in \left ( 0, \frac{\pi }{2}\right )$ , and hence $\lambda _1 \in \left (0, \frac{\pi ^2}{4L^2} \right )$ . Moreover, for any $x \in (\!-\!L,L)$ , the corresponding eigenfunction $\phi _1(x) = \cos{\left ( \sqrt{\lambda _1}x \right )}$ takes values in $(0,1)$ .
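The principal eigenvalue can be computed numerically: writing $\mu = \sqrt{\lambda }$ , the map $\mu \mapsto \mu \tan (L\mu ) - D$ increases from $-D$ to $+\infty$ on $(0, \pi /(2L))$ , so a bisection applies. A minimal sketch, with assumed values of $L$ and $D$ :

```python
import math

L, D = 2.0, 1.0  # half-length of the domain and migration rate (assumptions)

def principal_eigenvalue(tol=1e-13):
    """Smallest lambda > 0 with sqrt(lambda) * tan(L * sqrt(lambda)) = D.
    With mu = sqrt(lambda), g(mu) = mu tan(L mu) - D goes from -D at
    mu = 0 to +infinity as mu -> pi/(2L), so bisection applies."""
    g = lambda mu: mu * math.tan(L * mu) - D
    lo, hi = 0.0, math.pi / (2 * L) - 1e-12
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    mu = 0.5 * (lo + hi)
    return mu * mu

if __name__ == "__main__":
    lam1 = principal_eigenvalue()
    print(lam1)  # lies in (0, pi^2 / (4 L^2)), as in the proof of Theorem 2.5
```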

Proof of stability: Now let $p$ be a steady-state solution of (1.1) governed by (1.2). First, we prove that if $f'(p(x)) \lt \lambda _1$ for any $x \in (\!-\!L,L)$ , then $p$ is asymptotically stable. Indeed, since $f'(p(x)) \lt \lambda _1$ , there exist positive constants $\delta, \gamma$ with $\gamma \lt \lambda _1$ such that for any $\eta \in [0,\delta ]$ ,

(3.16) \begin{equation} f(p+\eta ) - f(p) \leq (\lambda _1 - \gamma ) \eta, \qquad f(p) - f(p-\eta ) \leq (\lambda _1 - \gamma ) \eta, \end{equation}

on $(\!-\!L,L)$ . Now consider

\begin{equation*} \overline {p}(t,x) = p(x) + \delta e^{-\gamma t} \phi _1(x), \qquad \underline {p}(t,x) = p(x) - \delta e^{-\gamma t} \phi _1(x). \end{equation*}

Assume that $p^{\mathrm{init}}(x) \leq p(x) + \delta \phi _1(x)$ . Then by (3.16), we have that $\overline{p}$ is a super-solution of (1.1) because

\begin{equation*} \partial _t\overline {p} - \partial _{xx}\overline {p} = (\lambda _1 - \gamma ) \delta e^{-\gamma t} \phi _1(x) + f(p) \geq f(p + \delta e^{-\gamma t} \phi _1(x)) = f(\overline {p}), \end{equation*}

due to the fact that $ 0 \lt \delta e^{-\gamma t} \phi _1(x) \lt \delta$ for any $t \gt 0$ , $x \in (\!-\!L,L)$ . Moreover, at the boundary points one has $\frac{\partial \overline{p}}{\partial \nu } + D(\overline{p} - p^{\mathrm{ext}}) = \frac{\partial p}{\partial \nu } + D(p - p^{\mathrm{ext}}) = 0.$

Similarly, if $p^{\mathrm{init}}(x) \geq p(x) - \delta \phi _1(x)$ , then $\underline{p}$ is a sub-solution of (1.1). Then, by the method of super- and sub-solutions (see e.g. [Reference Pao19]), the solution $p^0$ of (1.1) satisfies $\underline{p} \leq p^0 \leq \overline{p}$ . Hence, $ |p^0(t,x) - p(x)| \leq \delta e^{-\gamma t}\phi _1(x)$ . Therefore, we can conclude that, whenever $|p^{\mathrm{init}}(x) - p(x)| \leq \delta \phi _1(x)$ for any $x \in (\!-\!L,L)$ , the solution $p^0$ of (1.1) converges to the steady state $p$ as $t \rightarrow +\infty$ . This shows the stability of $p$ .

Proof of instability: In the case when $f'(p(x)) \gt \lambda _1$ , there exist positive constants $\delta, \gamma$ , with $\gamma \lt \lambda _1$ , such that for any $\eta \in [0,\delta ]$ ,

(3.17) \begin{equation} f(p+\eta ) - f(p) \geq (\lambda _1 + \gamma ) \eta, \end{equation}

on $(\!-\!L,L)$ .

For any $p^{\mathrm{init}} \gt p$ , there exists a positive constant $\sigma \lt 1$ such that $p^{\mathrm{init}} \geq p + \delta (1- \sigma )$ . Then $\widetilde{p}(t,x) = p(x) + \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x)$ , with $\gamma ' \lt \gamma$ small enough, is a sub-solution of (1.1). Indeed, by applying (3.17) with $\eta = \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x) \in [0,\delta ]$ for any $x \in (\!-\!L,L)$ , we have

\begin{equation*} \partial _t \widetilde {p} - \partial _{xx} \widetilde {p} = \gamma ' \delta \sigma e^{-\gamma ' t} \phi _1(x) + \lambda _1 \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x) + f(p) \leq f(p + \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x)) \end{equation*}

if $\gamma \geq \frac{\gamma '\sigma e^{-\gamma ' t}}{1 - \sigma e^{-\gamma ' t}} = \frac{\gamma ' \sigma }{e^{\gamma ' t} - \sigma }$ for any $t \geq 0$ . This inequality holds when we choose $\gamma ' \leq \frac{\gamma (1-\sigma )}{\sigma }$ . Now, we have that $\widetilde{p}$ is a sub-solution of (1.1), thus for any $t \geq 0, x \in (\!-\!L,L)$ , the corresponding solution $p^0$ satisfies

\begin{equation*} p^0(t,x) - p(x) \geq \tilde p(t,x)-p(x) \geq \delta (1 - \sigma e^{-\gamma ' t}) \phi _1(x). \end{equation*}

Hence, for a given positive $\epsilon \lt \delta \displaystyle \min _{x} \phi _1(x)$ , when $t \rightarrow +\infty$ , solution $p^0$ cannot remain in the $\epsilon$ -neighbourhood of $p$ even if $p^{\mathrm{init}} - p$ is small. This implies the instability of $p$ .

Proof of Corollary 2.1. For $p^{\mathrm{ext}} \leq \alpha _1 \lt \theta, D \gt 0$ , from Theorem 2.2, the SI steady-state solution $p$ exists for any $L \gt 0$ and is unique, $p(x) \leq p^{\mathrm{ext}} \leq \alpha _1$ for all $x \in (\!-\!L,L)$ . Moreover, from Assumption 2.2, the reaction term $f$ has $f'(q) \lt 0$ , for any $q \in (0,\alpha _1)$ . Then, for any $x \in (\!-\!L,L)$ , $f'(p(x)) \leq 0 \lt \lambda _1$ . Hence, $p$ is asymptotically stable.

Besides, from Theorems 2.2 and 2.3, for any $L \gt 0$ such that $L \lt M_d(p^{\mathrm{ext}}, D) \lt M_*(p^{\mathrm{ext}}, D)$ , (1.1) has neither SD nor non-SM steady-state solutions. So the SI steady-state solution is the unique steady-state solution.

Using a similar argument for the case $p^{\mathrm{ext}} \geq \alpha _2$ , we obtain Corollary 2.1.

4. Application to the control of dengue vectors by the introduction of the bacterium Wolbachia

4.1 Model

In this section, we present an application of our model to the control of mosquitoes using Wolbachia. Mosquitoes of the genus Aedes are vectors of many dangerous arboviruses, such as dengue, Zika and chikungunya. There exists neither effective treatment nor vaccine for these vector-borne diseases, so the main way to fight them is to control the vector population. A biological control method using a bacterium called Wolbachia (see [Reference Hoffmann, Montgomery, Popovici, Iturbe-Ormaetxe, Johnson, Muzzi, Greenfield, Durkan, Leong, Dong, Cook, Axford, Callahan, Kenny, Omodei, McGraw, Ryan, Ritchie, Turelli and O’Neill10]) was discovered and developed for this purpose. Besides reducing the ability of mosquitoes to transmit viruses, Wolbachia also causes an important phenomenon called cytoplasmic incompatibility (CI) in mosquitoes. More precisely, if a wild female mosquito is fertilised by a male carrying Wolbachia, its eggs almost never hatch. For more details about CI, we refer to [Reference Werren, Baldo and Clark32]. In the case of Aedes mosquitoes, Wolbachia reduces lifespan, changes fecundity and blocks the development of the virus. However, it does not influence the way mosquitoes move.

In [Reference Strugarek and Vauchelet26], models (1.3) and (1.4) were considered with $n_1 = n_i$ the density of the mosquitoes which are infected by Wolbachia and $n_2 = n_u$ the density of wild uninfected mosquitoes. Consider the following positive parameters:

  • $d_u, \delta d_u$ : death rate of, respectively, uninfected mosquitoes and infected mosquitoes, $\delta \gt 1$ since Wolbachia reduces the lifespan of the mosquitoes;

  • $b_u, (1-s_f)b_u$ : birth rate of, respectively, uninfected mosquitoes and infected ones. Here, $s_f \in [0,1)$ characterises the fecundity decrease;

  • $s_h \in (0,1]$ : the fraction of uninfected females’ eggs fertilised by infected males that do not hatch due to the CI;

  • $K$ : carrying capacity, $A$ : diffusion coefficient.

Parameters $\delta, s_f, s_h$ have been estimated in several cases and can be found in the literature (see [Reference Barton and Turelli1] and references therein). We always assume that $s_f \lt s_h$ (in practice, $s_f$ is close to 0 while $s_h$ is close to 1).

Several models have been proposed using these parameters. In the present study, a system of Lotka–Volterra type is proposed, where the parameter $\epsilon \gt 0$ is used to characterise the high fertility as follows:

(4.1a) \begin{align} \partial _t n_i^\epsilon - A\partial _{xx}n_i^\epsilon = (1-s_f)\frac{b_u}{\epsilon }n_i^\epsilon \left (1 - \frac{n_i^\epsilon + n_u^\epsilon }{K}\right ) - \delta d_u n_i^\epsilon,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \end{align}
(4.1b) \begin{align} \,\partial _t n_u^\epsilon - A\partial _{xx} n_u^\epsilon = \frac{b_u}{\epsilon }n_u^\epsilon \left ( 1- s_h \frac{n_i^\epsilon }{n_i^\epsilon + n_u^\epsilon } \right ) \left (1 - \frac{n_i^\epsilon + n_u^\epsilon }{K}\right ) - d_u n_u^\epsilon, \end{align}

where the reaction term describes birth and death. The factor $\left ( 1- s_h \frac{n_i^\epsilon }{n_i^\epsilon + n_u^\epsilon } \right )$ characterises the CI. Indeed, when $s_h = 1$ , no egg of an uninfected female fertilised by an infected male can hatch, that is, the CI is complete. The factor becomes $\frac{n_u^\epsilon }{n_i^\epsilon + n_u^\epsilon }$ , which means that the birth rate of uninfected mosquitoes is proportional to the fraction of uninfected parents, because only an uninfected couple can lay uninfected eggs. In contrast, $s_h = 0$ means that all the eggs of uninfected females hatch. In this case, the factor $\left ( 1- s_h \frac{n_i^\epsilon }{n_i^\epsilon + n_u^\epsilon } \right )$ equals $1$ , so the growth rate of the uninfected population is not altered by the pressure of the infected one.
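The effect of the CI factor can be made concrete with a small helper (the function name and the population values below are purely illustrative):

```python
def ci_factor(n_i, n_u, s_h):
    """Cytoplasmic-incompatibility factor 1 - s_h * n_i / (n_i + n_u)
    multiplying the uninfected birth rate in (4.1b)."""
    return 1.0 - s_h * n_i / (n_i + n_u)

if __name__ == "__main__":
    # complete CI (s_h = 1): only the uninfected fraction of couples reproduces
    print(ci_factor(30.0, 70.0, 1.0))   # = n_u / (n_i + n_u)
    # no CI (s_h = 0): the uninfected birth rate is unaffected
    print(ci_factor(30.0, 70.0, 0.0))
```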

In [Reference Strugarek and Vauchelet26], the same model was studied in the entire space $\mathbb{R}$ . In that case, the system (4.1) has exactly two stable equilibria, namely the Wolbachia invasion steady state and the Wolbachia extinction steady state. There, the authors show that when $\epsilon \rightarrow 0$ and the reaction terms satisfy some appropriate conditions, the proportion $p^\epsilon = \frac{n_i^\epsilon }{n_i^\epsilon + n_u^\epsilon }$ converges to the solution $p^0$ of the scalar equation $\partial _t p^0 - A\partial _{xx} p^0 = f(p^0)$ , with the reaction term:

(4.2) \begin{equation} f(p) = \delta d_u s_h \frac{p(1-p)(p-\theta )}{s_h p^2 - (s_f + s_h)p + 1}, \end{equation}

with $\theta = \frac{s_f + \delta -1}{\delta s_h}$ . We will always assume that $s_f + \delta (1 - s_h) \lt 1$ , so $\theta \in (0,1)$ , and $f$ is a bistable function on $(0,1)$ . The two stable steady states $1$ and $0$ of (1.1) correspond to the success or failure of the biological control using Wolbachia.
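A direct implementation of (4.2) makes it easy to check these properties. The parameter values below are illustrative choices in the admissible ranges discussed above, not fitted values:

```python
# Reaction term (4.2) of the limit equation; all parameter values are
# illustrative assumptions satisfying s_f < s_h and s_f + delta(1 - s_h) < 1.
S_F, S_H = 0.1, 0.9              # fecundity decrease and CI intensity
DELTA, D_U = 10.0 / 9.0, 0.27    # lifespan reduction factor and death rate
THETA = (S_F + DELTA - 1.0) / (DELTA * S_H)

def f(p):
    # bistable reaction term: f < 0 on (0, theta), f > 0 on (theta, 1)
    return DELTA * D_U * S_H * p * (1.0 - p) * (p - THETA) / (
        S_H * p**2 - (S_F + S_H) * p + 1.0)

if __name__ == "__main__":
    assert S_F + DELTA * (1.0 - S_H) < 1.0   # ensures theta in (0, 1)
    print(THETA, f(0.1), f(0.5))
```

With these values, $\theta \approx 0.21$ , and the sign pattern of $f$ confirms the bistable structure with stable states $0$ and $1$ .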

4.2 Mosquito population in presence of migration

In this study, the migration of mosquitoes is taken into account. Typically, the inflow of wild uninfected mosquitoes and the outflow of the infected ones may influence the efficiency of the method using Wolbachia. Here, to model this effect, system (4.1) is considered in a bounded domain with appropriate boundary conditions to characterise the migration of mosquitoes. In one-dimensional space, we consider $\Omega = (\!-\!L,L)$ and Robin boundary conditions as in (1.4) at points $x = -L$ , and $x = L$ :

(4.3a) \begin{align} \frac{\partial n_i^\epsilon }{\partial \nu } = & -D(n_i^\epsilon - n_i^{\mathrm{ext},\epsilon }), \end{align}
(4.3b) \begin{align} \frac{\partial n_u^\epsilon }{\partial \nu } = & -D(n_u^\epsilon - n_u^{\mathrm{ext},\epsilon }), \end{align}

where $n_i^{\mathrm{ext},\epsilon }, n_u^{\mathrm{ext},\epsilon }$ do not depend on $t$ and $x$ but depend on parameter $\epsilon \gt 0$ . Denote $p^\epsilon = \frac{n_i^\epsilon }{n_i^\epsilon + n_u^\epsilon }, n^\epsilon = \frac{1}{\epsilon } \left ( 1 - \frac{n_i^\epsilon + n_u^\epsilon }{K} \right )$ . In Section A, we prove that when $\epsilon \rightarrow 0$ , up to extraction of subsequences, $n^\epsilon$ converges weakly to $n^0 = h(p^0)$ for some explicit function $h$ , and $p^\epsilon$ converges strongly towards solution $p^0$ of (1.1) where $p^{\mathrm{ext}}$ is the limit of $\frac{n_i^{\mathrm{ext},\epsilon }}{n_i^{\mathrm{ext},\epsilon } + n_u^{\mathrm{ext},\epsilon }}$ when $\epsilon \rightarrow 0$ , and the reaction term $f$ as in (4.2). Function $f$ satisfies Assumptions 2.1 and 2.2, so the results in Theorems 2.2 and 2.5 can be applied to this problem. By changing the spatial scale, we can normalise the diffusion coefficient into $A = 1$ .

In this application, the parameters $L, D, p^{\mathrm{ext}}$ correspond, respectively, to the size of $\Omega$ , the migration rate of mosquitoes and the proportion of infected mosquitoes surrounding the boundary. The main results of the present paper give information about the existence and stability of equilibria depending on conditions on these parameters. In particular, Corollary 2.1 shows that when the size $L$ of the domain is small, this problem admits a unique equilibrium, whose values depend on the proportion $p^{\mathrm{ext}}$ of mosquitoes carrying Wolbachia outside the domain. More precisely, when $p^{\mathrm{ext}}$ is small (i.e. $p^{\mathrm{ext}} \leq \alpha _1$ ), the solution of (1.1) converges to the steady-state solution close to $0$ , which corresponds to the extinction of mosquitoes carrying Wolbachia. Therefore, in this situation, the replacement strategy fails because of the migration through the boundary. Conversely, when the proportion outside the domain is high (i.e. $p^{\mathrm{ext}} \geq \alpha _2$ ), the solution of (1.1) converges in the long run to the steady state with values close to $1$ , which means that the mosquitoes carrying Wolbachia can invade the whole population.

4.3 Numerical illustration

In this section, we present numerical illustrations of the above results. Parameters are fixed according to biologically relevant data (adapted from [Reference Focks, Haile, Daniels and Mount5]). The time unit is the day, and the per-day parameters are given in Table 2.

Table 2. Parameters for the numerical illustration

Figure 4. Convergence of $p^\epsilon$ to $p^0$ as $\epsilon$ goes to zero. The solid lines represent the solution $p^0(t, x)$ of (1.1) at $t = 50$ days. The dashed lines represent the proportion $p^\epsilon = \frac{n_i^\epsilon }{n_i^\epsilon +n_u^\epsilon }$ of solution $n_i^\epsilon, \ n_u^\epsilon$ of system (4.1) and (4.3) at $t =50$ .

Figure 5. The blue and red solid lines represent, respectively, functions $\mathcal{F}_1$ and $\mathcal{F}_2$ of $p(L)$ .

Figure 6. Case $p^{\mathrm{ext}} = 0.1, D = 0.05$ : the solid lines illustrate the steady-state solutions. The dotted lines show the initial data of problem (1.1). The dashed lines represent the solution $p^0(t,x)$ with $t \in \{10, 20, 40, 60, 100\}$ . The colour of the dashed lines corresponds to the colour of the equilibrium that they converge to.

With these parameters, the reaction term $f$ in (4.2) has $\theta = 0.2375$ , $\beta \approx 0.3633, \alpha _1 \approx 0.12, \alpha _2 \approx 0.7$ . As proposed in Section 3 of the modelling article [Reference Otero, Schweigmann and Solari17], we may pick the value $830\,{\rm m}^2$ per day for the diffusivity of Aedes mosquitoes. Choosing $A = 1$ , the $x$ -axis unit in the simulation corresponds to $\sqrt{830/1} \approx 29$ m.

We first check the convergence of $p^\epsilon$ as $\epsilon \rightarrow 0$ in Section 4.3.1. Then, in Section 4.3.2, for different values of the parameters, we compute numerically the solutions of (1.1) and (1.2) to check their existence and stability.

4.3.1 Convergence to the scalar equation

Consider a mosquito population with a large fecundity rate, that is, $ \epsilon \ll 1$ . Model (4.1) with boundary condition in (4.3) takes into account the migration of mosquitoes.

Fixing $D = 0.05, p^{\mathrm{ext}} = 0.1$ and $ L = 2$ , system (4.1) with boundary conditions (4.3) is solved numerically thanks to a semi-implicit finite difference scheme, for three different values of the parameter $\epsilon$ . The initial data are chosen such that $n_i^\epsilon (t = 0) = n_u^\epsilon (t = 0)$ , that is, $p^{\mathrm{init}} = 0.5$ . In Figure 4, at time $t = 50$ days, the numerical solutions of (1.1) are plotted with blue solid lines, and the proportions $p^\epsilon = \frac{n_i^\epsilon }{n_i^\epsilon +n_u^\epsilon }$ are plotted with dashed lines. We observe that when $\epsilon$ goes to 0, the proportion $p^\epsilon$ converges to the solution $p^0$ of system (1.1).
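For the reader's convenience, a minimal Python sketch of a semi-implicit scheme for the limit equation (1.1) is given below (implicit diffusion, explicit reaction, Robin rows imposed at the new time level). The cubic nonlinearity is only an illustrative proxy for the actual reaction term (4.2), and the run mimics the parameters of this subsection; it is not the exact code used for Figure 4:

```python
import numpy as np

def semi_implicit_step(p, dt, dx, D, p_ext, f):
    """One semi-implicit step for p_t - p_xx = f(p) on (-L, L):
    implicit diffusion, explicit reaction, Robin conditions
    dp/dnu = -D (p - p_ext) discretized by one-sided differences."""
    n = p.size
    r = dt / dx**2
    A = np.zeros((n, n))
    rhs = p + dt * f(p)
    for i in range(1, n - 1):            # interior: backward Euler diffusion
        A[i, i - 1], A[i, i], A[i, i + 1] = -r, 1 + 2 * r, -r
    # Robin rows: (p1 - p0)/dx = D (p0 - p_ext) on the left (normal is -x),
    #             (pN - pN-1)/dx = -D (pN - p_ext) on the right (normal is +x).
    A[0, 0], A[0, 1], rhs[0] = 1 + D * dx, -1.0, D * dx * p_ext
    A[-1, -1], A[-1, -2], rhs[-1] = 1 + D * dx, -1.0, D * dx * p_ext
    return np.linalg.solve(A, rhs)

# Hypothetical run mimicking Section 4.3.1: D = 0.05, p_ext = 0.1, L = 2.
theta = 0.2375
f = lambda p: p * (1 - p) * (p - theta)  # illustrative bistable proxy for (4.2)
x = np.linspace(-2.0, 2.0, 81)
p = np.full_like(x, 0.5)                 # p_init = 0.5, as in the text
for _ in range(1000):
    p = semi_implicit_step(p, dt=0.05, dx=x[1] - x[0], D=0.05, p_ext=0.1, f=f)
```

Since the implicit diffusion matrix is an M-matrix and the reaction map is monotone for this small time step, the iterates remain in $[0,1]$ , and the Robin rows enforce the boundary condition exactly at each step.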

4.3.2 Steady-state solutions

For different values of $p^{\mathrm{ext}}$ , the values of the integrals $\mathcal{F}_{1}$ and $\mathcal{F}_2$ in (3.8) and (3.11), as functions of $p(L)$ , are plotted in Figure 5. For fixed values of $D$ and $p^{\mathrm{ext}}$ , Figure 5 plays the role of a bifurcation diagram showing the relation between the value $p(L)$ of a symmetric solution $p$ and the parameter $L$ ; from it, we can read off the critical values of the parameter $L$ . Next, we compute numerically the SM steady-state solutions of (1.1) for different values of $L \gt 0, D \gt 0, p^{\mathrm{ext}} \in (0,1)$ .

Numerical method: To approximate the SM steady-state solutions, we use Newton's method to solve the non-linear equations and follow these steps:

$\circ$ Step 1: Solve $L = \mathcal{F}_{i}(p(L))$ for $i = 1$ or $2$ , and obtain the values of $p(L)$ .

$\circ$ Step 2: Find $p(0)$ by solving (3.5).

$\circ$ Step 3: For each $x$ in $(0,L)$ , interpolate $p(x)$ by solving $x = \displaystyle \int _{p(0)}^{p(x)} \frac{(\!-\!1)^k ds}{\sqrt{2F(p(0)) - 2 F(s)}}$ due to (3.4) with $k = 1$ if $p$ is decreasing and $k = 2$ if $p$ is increasing on $(0, x)$ .

The construction of a non-SM steady-state solution is more involved, since it is hard to find $p(L)$ for a fixed $L$ as in Step 1. We present a numerical non-SM equilibrium in Figure 6(c), for which we first fix a value $p(L)$ . Then, similarly to Step 2, we solve (3.15) to find all the extreme values of $p$ . Finally, we apply Step 3 with $p(0)$ replaced by $p_{\mathrm{min}}$ or $p_{\mathrm{max}}$ .
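As a cross-check of these constructions, one can also obtain an SM equilibrium by applying Newton's method directly to the discretized boundary value problem $-p'' = f(p)$ with the Robin conditions, bypassing the quadrature of Step 3. The sketch below is a sanity check under assumed data (illustrative cubic proxy for $f$ , parameters mimicking Case 1 with $L = 0.5$ ); it is not the phase-plane method of Section 3:

```python
import numpy as np

def robin_residual(p, dx, D, p_ext, f):
    """Residual of the discretized steady problem -p'' = f(p) with
    Robin conditions dp/dnu = -D (p - p_ext) at both endpoints."""
    G = np.empty_like(p)
    G[1:-1] = (p[:-2] - 2 * p[1:-1] + p[2:]) / dx**2 + f(p[1:-1])
    G[0] = (p[1] - p[0]) / dx - D * (p[0] - p_ext)      # left (normal -x)
    G[-1] = (p[-1] - p[-2]) / dx + D * (p[-1] - p_ext)  # right (normal +x)
    return G

def steady_state_newton(f, fp, L, D, p_ext, n=201, p_guess=0.05, tol=1e-10):
    """Newton iteration on the discretized steady-state problem."""
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    p = np.full(n, p_guess)
    for _ in range(50):
        G = robin_residual(p, dx, D, p_ext, f)
        if np.max(np.abs(G)) < tol:
            break
        J = np.zeros((n, n))                  # Jacobian of the residual
        for i in range(1, n - 1):
            J[i, i - 1] = J[i, i + 1] = 1 / dx**2
            J[i, i] = -2 / dx**2 + fp(p[i])
        J[0, 0], J[0, 1] = -1 / dx - D, 1 / dx
        J[-1, -1], J[-1, -2] = 1 / dx + D, -1 / dx
        p = p + np.linalg.solve(J, -G)
    return x, p

theta = 0.2375
f = lambda p: p * (1 - p) * (p - theta)               # proxy for (4.2)
fp = lambda p: (1 - 2 * p) * (p - theta) + p * (1 - p)
x, pbar = steady_state_newton(f, fp, L=0.5, D=0.05, p_ext=0.1)
```

Starting from a small constant guess, the iteration converges to the symmetric equilibrium with small values, consistently with the unique SI steady state expected in this regime.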

We also plot the time dynamics of the solution $p^0(t,x)$ of (1.1) at $t =10, 20, 40,60, 100$ to verify the asymptotic stability of steady-state solutions. Next, we consider different values of $p^{\mathrm{ext}}$ and present our observation in each case.

  • Case 1: $p^{\mathrm{ext}} = 0.1 \lt \alpha _1$ .

For $D = 0.05$ fixed, we observe in Figure 5(a) that for any $L \gt 0$ , the equation $\mathcal{F}_2(p(L)) = L$ always admits exactly one solution. Thus, there always exists one SI steady-state solution with small values. Numerically, we obtain

\begin{equation*} M_d(0.1,0.05) = M_1 \approx 0.8819, \quad M_*(0.1, 0.05) \approx 8.625. \end{equation*}

Also from Figure 5(a), we observe that when $L = M_1$ , a bifurcation occurs and (1.1) admits an SD steady-state solution, while for $L \gt M_1$ one obtains two SD solutions. Moreover, when $L \geq M_*$ , there exist non-symmetric steady-state solutions. We perform numerical simulations for two values of $L$ as follows.

Figure 7. Case $p^{\mathrm{ext}} = 0.8$ : The solid lines illustrate the steady-state solutions. The dotted lines show the initial data of problem (1.1). The dashed lines represent the solution $p^0(t,x)$ with $t \in \{10, 20, 40, 60, 100\}$ . The colour of the dashed lines corresponds to the colour of the equilibrium that they converge to.

For $L = 0.5 \lt M_1$ , the unique equilibrium $\overline{p}_{21}$ is SI and has values close to $0$ (see Figure 6(a)). The solution $p^0$ of (1.1) with any initial data converges to $\overline{p}_{21}$ . This simulation is consistent with the asymptotic stability proved in Corollary 2.1.

For $L = 8.96 \gt M_* \gt M_1$ , together with $\overline{p}_{21}$ , there exist two more SD steady-state solutions, namely $\overline{p}_{11}$ and $\overline{p}_{12}$ (see Figure 6(b)). This plot shows that these steady-state solutions are ordered, and that the time-dependent solutions converge either to the largest one $\overline{p}_{11}$ or to the smallest one $\overline{p}_{21}$ , while $\overline{p}_{12}$ , with intermediate values, is not an attractor. In Figure 6(c), we find numerically a non-symmetric solution $\overline{p}$ of (1.2) corresponding to the orbit $T_3$ in Figure 3(a). Taking the initial value $p^{\mathrm{init}} \equiv \overline{p}$ , we observe in Figure 6(c) that $p^0$ still converges to the symmetric equilibrium $\overline{p}_{21}$ .

Moreover, the value $\lambda _1$ of Theorem 2.5 in this case is approximately equal to $0.0063$ . We also obtain that for any $x \in (\!-\!L,L)$ ,

\begin{equation*} f'(\overline {p}_{11}(x)) \lt 0,\quad f'(\overline {p}_{21}(x)) \lt 0,\quad f'(\overline {p}_{12}(x)) \gt 0.0462,\quad f'(\overline {p}(x)) \gt 0.022. \end{equation*}

Therefore, by applying Theorem 2.5, we deduce that the steady-state solutions $\overline{p}_{11}, \overline{p}_{21}$ are asymptotically stable, while $\overline{p}_{12}$ and the non-symmetric equilibrium $\overline{p}$ are unstable. Thus, the numerical simulations in Figure 6 are consistent with the theoretical results that we proved.

  • Case 2: $p^{\mathrm{ext}} = 0.8 \gt \alpha _2 \gt \beta$ .

In this case, we obtain $D_* \approx 0.16$ . We present numerical illustrations for two cases: $D = 0.05 \lt D_*$ and $D = 0.5 \gt D_*$ .

$\circ$ For $D = 0.05 \lt D_*$ , we have $M_i(0.8, 0.05) = M_2 \approx 10.3646$ (see Figure 5(b)).

For $L = 2 \lt M_2$ , the unique equilibrium $\overline{p}_{11}$ is SD and has values close to $1$ (see Figure 7(a)). The time-dependent solution $p^0$ of (1.1) with any initial data converges to $\overline{p}_{11}$ . This simulation is consistent with the asymptotic stability obtained in Corollary 2.1.

For $L = 12 \gt M_2$ , together with $\overline{p}_{11}$ , there exist two more SI steady-state solutions, namely $\overline{p}_{21}$ and $\overline{p}_{22}$ , and they are ordered (see Figure 7(b)). In this case, we obtain $\lambda _1 \approx 0.0063$ and, for any $x \in (\!-\!L,L)$ , one has

\begin{equation*} f'(\overline {p}_{11}(x)) \lt 0, \quad f'(\overline {p}_{21}(x)) \in (\!-\!0.0398, 0.0368), \quad f'(\overline {p}_{22}(x)) \in (\!-\!0.0195, 0.0673). \end{equation*}

By the sufficient conditions in Theorem 2.5, we obtain that $\overline{p}_{11}$ is asymptotically stable, but we cannot conclude on the stability of $\overline{p}_{21}$ and $\overline{p}_{22}$ . The time dynamics of $p^0$ in Figure 7(b) suggests that the smallest steady-state solution $\overline{p}_{21}$ is asymptotically stable and that $\overline{p}_{22}$ is unstable.

$\circ$ For $D = 0.5 \gt D_*$ , function $\mathcal{F}_2$ is not defined (see Figure 5(c)), so problem (1.2) admits a unique SD steady-state solution, which is asymptotically stable (see Figure 7(c)).

5. Conclusion and perspectives

We have studied the existence and stability of steady-state solutions with values in $[0,1]$ of a reaction–diffusion equation:

\begin{equation*} \partial _t p - \partial _{xx}p = f(p) \end{equation*}

on an interval $(\!-\!L,L)$ with cubic non-linearity $f$ and inhomogeneous Robin boundary conditions:

\begin{equation*} \frac {\partial p}{\partial \nu } = D(p-p^{\mathrm {ext}}), \end{equation*}

where the constant $p^{\mathrm{ext}} \in (0,1)$ is an analogue of $p$ outside the domain and $D \gt 0$ is a constant. We have shown how the analysis of this problem depends on the parameters $p^{\mathrm{ext}}$ , $D$ and $L$ . More precisely, the main results state that there always exists a symmetric steady-state solution that is monotone on each half of the domain. For $p^{\mathrm{ext}}$ large, this steady-state solution takes values close to $1$ ; otherwise, its values are close to $0$ . Besides, the larger the value of $L$ , the more steady-state solutions the problem admits. We have identified the critical values of $L$ beyond which the number of steady-state solutions increases. We also provided some sufficient conditions for the stability and instability of the steady-state solutions.

We presented an application of our results to the control of dengue vectors using the Wolbachia bacterium, which is transmitted maternally. Since Wolbachia can help reduce the vectorial capacity of mosquitoes, the main goal of this method is to replace the wild mosquitoes by mosquitoes carrying Wolbachia. In this application, we considered $p$ as the proportion of mosquitoes carrying Wolbachia and used the equation above to model the dynamics of the mosquito population. The boundary condition describes the migration through the border of the domain. This replacement method only works when $p$ can reach an equilibrium close to $1$ . Therefore, the study of the existence and stability of the steady-state solution close to $1$ is meaningful and depends strongly on the parameters $p^{\mathrm{ext}}$ , $D$ and $L$ . In realistic situations, the proportion $p^{\mathrm{ext}}$ of mosquitoes carrying Wolbachia outside the domain is usually low. Using the theoretical results proved in this article, one sees that, to maximise the chances of success, one should try to treat large regions ( $L$ large) that are well isolated ( $D$ small), possibly applying a population replacement method in a zone outside $\Omega$ (to increase $p^{\mathrm{ext}}$ by reducing its denominator).

As a natural continuation of the present work, higher-dimensional problems and more general boundary conditions can be studied. In more realistic settings, $p^{\mathrm{ext}}$ may depend on space, and periodic solutions constitute a natural next question for our study. Besides, when an equilibrium close to $1$ exists and is stable, one may consider strategies based on multiple releases of mosquitoes carrying Wolbachia. Optimising the number of mosquitoes released to guarantee the success of this method, under the difficulties highlighted in this paper, is an interesting problem for future works.

Financial support

This work has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 945322.

Competing interests

The authors declare no competing interests.

A. Asymptotic limit of reaction–diffusion systems

In [Reference Strugarek and Vauchelet26], the authors reduced a 2-by-2 reaction–diffusion system of Lotka–Volterra type, modelling two biological populations, to a scalar equation as in (1.1) in the limit of a large fecundity rate. This limit was first proved on the whole space. In the present study, we prove the limit for a system on a bounded domain with inhomogeneous Robin boundary conditions. In the following part, we recall the necessary assumptions and present the results for this problem.

Although the main result of the paper is stated in one space dimension, the following result holds in any dimension $d$ . Let $\Omega \subset \mathbb{R}^d$ be a bounded domain and consider the initial-boundary value problem (A1) depending on the parameter $\epsilon \gt 0$ :

(A1a) \begin{align} \partial _t n_1^\epsilon - \Delta n_1^\epsilon = n_1^\epsilon f_1^\epsilon (n_1^\epsilon,n_2^\epsilon ), & & (t,x) \in (0,T) \times \Omega, \end{align}
(A1b) \begin{align} \partial _t n_2^\epsilon - \Delta n_2^\epsilon = n_2^\epsilon f_2^\epsilon (n_1^\epsilon,n_2^\epsilon ), & & (t,x) \in (0,T) \times \Omega, \end{align}
(A1c) \begin{align} n_1^\epsilon (0,x) = n_1^{\mathrm{init},\epsilon }(x), \quad n_2^\epsilon (0,x) = n_2^{\mathrm{init},\epsilon }(x), & & x \in \Omega, \end{align}
(A1d) \begin{align} \frac{\partial n_1^\epsilon }{\partial \nu } = -D(n_1^\epsilon - n_1^{\mathrm{ext},\epsilon }), \quad \frac{\partial n_2^\epsilon }{\partial \nu } = -D(n_2^\epsilon - n_2^{\mathrm{ext},\epsilon }), & & (t,x) \in (0,T) \times \partial \Omega, \end{align}

where we assume that $f_1^\epsilon, f_2^\epsilon$ are smooth enough to guarantee existence and uniqueness of a classical solution for fixed $\epsilon$ . More precisely, the following assumptions are made:

Assumption A.1. (Initial and boundary conditions). Assume that $n_1^{\mathrm{init},\epsilon }, n_2^{\mathrm{init},\epsilon } \in L^\infty (\Omega )$ with $ n_1^{\mathrm{init},\epsilon }, n_2^{\mathrm{init},\epsilon } \geq 0$ , and that $n_2^{\mathrm{init},\epsilon }$ is not identically equal to $0$ .

The constant $D \gt 0$ and the boundary data $n_1^{\mathrm{ext},\epsilon } \geq 0$ , $n_2^{\mathrm{ext},\epsilon } \gt 0$ do not depend on time $t$ or position $x$ .

To study the limit problem, we define the “rescaled total population” $n^\epsilon$ and proportion $p^\epsilon$ by:

\begin{equation*} n^\epsilon \;:\!=\; \frac {1}{\epsilon } - n_1^\epsilon - n_2^\epsilon, \quad p^\epsilon \;:\!=\; \frac {n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }. \end{equation*}

Next, we recall some assumptions proposed in [Reference Strugarek and Vauchelet26] on the families of functions $(f_1^\epsilon, f_2^\epsilon )_{\epsilon \gt 0}$ in order to study the convergence of $p^\epsilon$ when $\epsilon \rightarrow 0$ .

Assumption A.2. Functions $f_1^\epsilon, f_2^\epsilon$ are of class $\mathcal{C}^2(\mathbb{R}^2_+ \setminus \{ 0\})$ , and for $i \in \{1,2\}$ there exists $F_i \in \mathcal{C}^2(\mathbb{R}^2)$ (independent of $\epsilon$ ) such that

(A2) \begin{equation} f_i^\epsilon (n_1^\epsilon,n_2^\epsilon ) = F_i(n^\epsilon,p^\epsilon ). \end{equation}

That is, we may write $f_i^\epsilon (n_1^\epsilon,n_2^\epsilon ) = F_i\left (\frac{1}{\epsilon } - n_1^\epsilon - n_2^\epsilon, \frac{n_1^\epsilon }{n_1^\epsilon + n_2^\epsilon }\right )$ for $i \in \{1,2\}$ .

Then, we can deduce that $p^\epsilon$ and $n^\epsilon$ satisfy system (A3) as follows:

In $(0,T) \times \Omega$ , we have

(A3a) \begin{align} \partial _t n^\epsilon - \Delta n^\epsilon = -\left(\frac{1}{\epsilon } - n^\epsilon \right) \left [p^\epsilon F_1(n^\epsilon,p^\epsilon ) + (1-p^\epsilon )F_2(n^\epsilon,p^\epsilon )\right ], \end{align}
(A3b) \begin{align} \partial _t p^\epsilon - \Delta p^\epsilon + \frac{2\epsilon A}{1- \epsilon n^\epsilon } \nabla p^\epsilon \cdot \nabla n^\epsilon = p^\epsilon (1-p^\epsilon )(F_1-F_2)(n^\epsilon,p^\epsilon ), \end{align}

on the boundary $(0,T) \times \partial \Omega$ , we have

(A3c) \begin{equation} \frac{\partial n^\epsilon }{\partial \nu } = -D(n^\epsilon - n^{\mathrm{ext},\epsilon }), \quad \frac{\partial p^\epsilon }{\partial \nu } = -D(p^\epsilon - p^{\mathrm{ext},\epsilon })\frac{1 - \epsilon n^{\mathrm{ext},\epsilon }}{1 - \epsilon n^{\epsilon }}, \end{equation}

at time $t = 0$ , for any $x \in \Omega$ , the initial data read

(A3d) \begin{equation} n^\epsilon (0,x) = n^{\mathrm{init},\epsilon }(x), \quad p^\epsilon (0,x) = p^{\mathrm{init},\epsilon }(x), \end{equation}

where $(F_1-F_2)(n^\epsilon,p^\epsilon ) = F_1(n^\epsilon,p^\epsilon ) - F_2(n^\epsilon,p^\epsilon )$ , and

\begin{equation*} n^{\mathrm {init},\epsilon } \;:\!=\; \frac {1}{\epsilon } - n_1^{\mathrm {init},\epsilon } - n_2^{\mathrm {init},\epsilon }, \quad p^{\mathrm {init},\epsilon } \;:\!=\; \frac {n_1^{\mathrm {init},\epsilon }}{n_1^{\mathrm {init},\epsilon } + n_2^{\mathrm {init},\epsilon }}, \end{equation*}
\begin{equation*} n^{\mathrm {ext},\epsilon } \;:\!=\; \frac {1}{\epsilon } - n_1^{\mathrm {ext},\epsilon } - n_2^{\mathrm {ext},\epsilon }, \quad p^{\mathrm {ext},\epsilon } \;:\!=\; \frac {n_1^{\mathrm {ext},\epsilon }}{n_1^{\mathrm {ext},\epsilon } + n_2^{\mathrm {ext},\epsilon }}. \end{equation*}

Let us denote $H(n,p) = -p F_1(n,p) - (1-p)F_2(n,p)$ . The following assumption guarantees existence of zeros of $H$ given by $(n, p) = (h(p),p)$ for each $p\in [0,1]$ .

Assumption A.3. In addition to Assumption A.2,

  (i) There exists $B \gt 0$ such that for all $n \geq 0$ , $p \in [0,1]$ , $\partial _n H(n,p) \leq -B$ ;

  (ii) For all $p \gt 0$ , $H(0,p) \gt 0$ .

Conditions (i) and (ii) imply that for all $p \in [0,1]$ , there exists a unique $n \;=\!:\; h(p) \in \mathbb{R}_+^*$ such that $H(n,p) = 0$ . We have $H \in \mathcal{C}^2(\mathbb{R}^2_+)$ (from Assumption A.2) thus $h \in \mathcal{C}^2(0,1)$ , with $H(h(p),p)= 0$ for all $p \in [0,1]$ .
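For completeness, the argument behind this claim can be sketched as follows (a standard monotonicity argument; the explicit bound on $h(p)$ below is our addition). By (i),

\begin{equation*} H(n,p) = H(0,p) + \int _0^n \partial _n H(s,p)\, ds \leq H(0,p) - Bn, \end{equation*}

so $H(\cdot,p)$ is strictly decreasing in $n$ and becomes negative for $n \gt H(0,p)/B$ . Since $H(0,p) \gt 0$ by (ii), the intermediate value theorem yields a unique root $n = h(p) \in (0, H(0,p)/B]$ .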

The following assumptions are made on the initial data and boundary conditions.

Assumption A.4. There exists a function $p^{\mathrm{init}} \in L^2(\Omega )$ such that $p^{\mathrm{init}, \epsilon } \displaystyle \underset{\epsilon \rightarrow 0}{\rightharpoonup } p^{\mathrm{init}}$ weakly in $L^2(\Omega )$ . The function $n^{\mathrm{init},\epsilon } - h(0)$ belongs to $L^2 \cap L^\infty (\Omega )$ and is uniformly bounded with respect to $\epsilon \gt 0$ .

Assumption A.5. There exist positive constants $\tilde{\epsilon }, \tilde{K}$ such that for any $\epsilon \in (0, \tilde{\epsilon })$ , we have $|n^{\mathrm{ext},\epsilon }| \lt \tilde{K}$ .

There exists a constant $p^{\mathrm{ext}} \in (0,1)$ , not depending on $\epsilon$ , such that $p^{\mathrm{ext},\epsilon } \underset{\epsilon \rightarrow 0}{\rightarrow } p^{\mathrm{ext}}$ .

Convergence result. For fixed $\epsilon \gt 0$ , existence of solutions of (A3) is classical (see, e.g. [Reference Perthame20]). Following the idea in [Reference Strugarek and Vauchelet26], we present the asymptotic limit of the proportion $p^\epsilon$ and $n^\epsilon$ in the following theorem.

Theorem A.1. Assume that Assumptions A.1– A.5 are satisfied and consider the solution $(n^\epsilon, p^\epsilon )$ of (A3). Then, for all $T \gt 0$ , we have the convergence:

\begin{equation*} \begin {cases} p^\epsilon \xrightarrow [\epsilon \rightarrow 0]{} p^0 \text { strongly in } L^2(0,T;\;L^2(\Omega )), \text { weakly in } L^2(0,T;\;H^1(\Omega )), \\[5pt] n^\epsilon - h(p^\epsilon ) \xrightarrow [\epsilon \rightarrow 0]{} 0 \text { strongly in } L^2(0,T;\;L^2(\Omega )), \text { weakly in } L^2(0,T;\;H^1(\Omega )), \end {cases} \end{equation*}

where $p^0$ is the unique solution of

\begin{equation*} \begin {cases} \partial _t p^0 - \Delta p^0 = p^0(1-p^0)(F_1-F_2)(h(p^0),p^0), & \text { in } (0,T) \times \Omega, \\[5pt] p^0(0,\cdot ) = p^{\mathrm {init}} & \text { in } \Omega \\[5pt] \frac {\partial p^0}{\partial \nu } = -D(p^0 - p^{\mathrm {ext}}) & \text { on } (0,T)\times \partial \Omega. \end {cases} \end{equation*}

We recall the a priori estimates of [Reference Strugarek and Vauchelet26] without proof and present some bounds on the boundary in Appendix A.1. Then we use the Aubin–Lions lemma and the trace theorem to prove the limit in Appendix A.2.

A.1 Uniform a priori estimates

First, we establish the $L^\infty$ bound, uniform with respect to $\epsilon$ , in the following lemma.

Lemma A.1. Under Assumptions A.1– A.5, for a given value $\epsilon \gt 0$ , let $(n^\epsilon, p^\epsilon )$ be the unique solution of (A3). Then, for any $T \gt 0$ , $0 \leq p^\epsilon \leq 1$ in $[0,T] \times \overline{\Omega }$ for all $\epsilon \gt 0$ . Also, there exist $\epsilon _0 \gt 0, K_0 \gt 0$ such that for any $\epsilon \in (0,\epsilon _0)$ , $|| n^\epsilon ||_{L^\infty ([0,T] \times \Omega )} \leq K_0$ .

Moreover, $n^\epsilon$ is uniformly bounded on $[0,T] \times \partial \Omega$ .

Proof. Using the same method as in Lemma 5 of [Reference Strugarek and Vauchelet26], we obtain the uniform bounds for $p^\epsilon$ in $[0,T] \times \overline{\Omega }$ , and for $n^\epsilon$ in $L^\infty ([0,T] \times \Omega )$ .

Moreover, for any $ x \in \partial \Omega$ , let $\nu$ be the normal outward vector through $x$ . Then, for $\delta \gt 0$ small enough, $x - \delta \nu \in \Omega$ . From the boundary condition for $n^\epsilon$ in (A3), one has for $t \in [0,T]$ , $\displaystyle \lim _{\delta \rightarrow 0^+} \frac{n^\epsilon (t,x) - n^\epsilon (t,x-\delta \nu )}{\delta } = -D(n^\epsilon (t,x) - n^{\mathrm{ext},\epsilon })$ .

So for any $\eta \gt 0$ , there exists $\delta \gt 0$ small such that

\begin{equation*} \left |\frac {n^\epsilon (t,x) - n^\epsilon (t,x-\delta \nu )}{\delta } + D(n^\epsilon (t,x) - n^{\mathrm {ext},\epsilon })\right | \leq \eta. \end{equation*}

Thus, $n^\epsilon (t,x) (1+\delta D) \leq n^\epsilon (t,x-\delta \nu ) + \delta D n^{\mathrm{ext},\epsilon } + \delta \eta$ . Then, for $\eta$ and $\delta$ small enough and any $\epsilon \lt \epsilon _0$ , $t \in [0,T], x \in \partial \Omega$ , since $x - \delta \nu \in \Omega$ , one has $|n^\epsilon (t,x)| \leq K_0 + \delta D \tilde{K} + \delta \eta \lt K_1$ . Hence, $n^\epsilon$ is uniformly bounded on $[0,T] \times \partial \Omega$ and $||n^\epsilon ||_{L^\infty ([0,T] \times \partial \Omega )} \leq K_1$ .

The following lemmas can be proved analogously to the proof in [Reference Strugarek and Vauchelet26].

Lemma A.2. Under Assumptions A.1– A.5, for $\epsilon \gt 0$ small enough, let $(n^\epsilon, p^\epsilon )$ be the unique solution of (A3). We have the following uniform estimates:

(A4) \begin{equation} \epsilon \int _{0}^{T} \int _\Omega |\nabla n^\epsilon |^2 dxdt \leq C_0, \quad \int _{0}^{T} \int _\Omega |\nabla p^\epsilon |^2 dxdt \leq \overline{C}, \end{equation}

for some positive constants $C_0$ and $\overline{C}$ .

Denote $M^\epsilon \;:\!=\; n^\epsilon - h(p^\epsilon )$ , where $h$ is defined in Assumption A.3. The following lemma provides the convergence of $M^\epsilon$ .

Lemma A.3. Let $T \gt 0$ . Under Assumptions A.1– A.5, one has $M^\epsilon \displaystyle \rightarrow 0$ in $L^2(0,T;\; L^2(\Omega ))$ when $\epsilon \rightarrow 0$ .

Now, we provide a uniform estimate for $\partial _t p^\epsilon$ with respect to $\epsilon$ in the following lemma.

Lemma A.4. Under Assumptions A.1– A.5, for $\epsilon \gt 0$ small enough, $\partial _t p^\epsilon$ is uniformly bounded in $L^2(0,T;\;X')$ with respect to $\epsilon$ , where $ X = H^1(\Omega ) \cap L^\infty (\Omega )$ .

A.2 Proof of convergence

The proof of Theorem A.1 relies on the relative compactness obtained from the Aubin–Lions lemma below (see [Reference Simon23]).

Lemma A.5 (Aubin–Lions). Let $T \gt 0$ , $q \in (1,\infty )$ and let $(\psi _n)_n$ be a sequence in $L^q(0,T;\;B)$ , where $B$ is a Banach space. If $(\psi _n)_n$ is bounded in $L^q(0,T;\;X)$ , where $X$ embeds compactly in $B$ , and if $(\partial _t\psi _n)_n$ is bounded in $L^q(0,T;\;X')$ uniformly with respect to $n$ , then $(\psi _n)_n$ is relatively compact in $L^q(0,T;\;B)$ .

Proof of Theorem A.1. We prove Theorem A.1 in three steps. First, we obtain the relative compactness of $(p^\epsilon )$ by applying the Aubin–Lions lemma and prove that there exists (up to extraction of subsequences) a limit function. Then, we study its behaviour on the boundary using the trace theorem. Finally, thanks to our uniform bounds, we show that the limit function satisfies a problem whose solution is unique.

Step 1: In our problem, we apply the Aubin–Lions lemma with $q= 2, B = L^2(\Omega )$ and $X = H^1(\Omega ) \cap L^\infty (\Omega )$ to $(\psi _\epsilon ) = (p^\epsilon )_\epsilon$ . The compact embedding of $X$ into $B$ holds by the Rellich–Kondrachov theorem. In the previous section, we have already obtained uniform estimates that are sufficient to apply the Aubin–Lions lemma. The sequence $(p^\epsilon )_\epsilon$ is bounded in $L^2(0,T;\;L^2(\Omega ))$ due to Lemma A.1:

\begin{equation*} || p^\epsilon ||^2_{L^2(0,T;\;L^2(\Omega ))} = \displaystyle \int _{0}^{T} \int _\Omega |p^\epsilon |^2 dx dt \leq || p^\epsilon ||^2_{L^\infty ((0,T) \times \Omega )} \mathrm { meas}(\Omega )\, T \lt \infty, \end{equation*}

for $\epsilon \lt \epsilon _0$ small enough. Then, due to Lemma A.2, this sequence is bounded in $L^2(0,T;\;X)$ . The sequence $(\partial _t p^\epsilon )_\epsilon$ is bounded in $L^2(0,T;\;X')$ by Lemma A.4. Thus, we can apply Aubin–Lions lemma and deduce that $(p^\epsilon )_\epsilon$ is strongly relatively compact in $L^2(0,T;\;L^2(\Omega ))$ . Therefore, there exists $p^0 \in L^2(0,T;\;H^1(\Omega ))$ such that, up to extraction of subsequences, we have $p^\epsilon \rightarrow p^0$ strongly in $L^2((0,T) \times \Omega )$ and almost everywhere, $\nabla p^\epsilon \rightharpoonup \nabla p^0$ weakly in $L^2((0,T) \times \Omega )$ .

Moreover, by the triangle inequality we have $|n^\epsilon - h(p^0)| \leq |n^\epsilon - h(p^\epsilon )| + |h(p^\epsilon ) - h(p^0)| \leq |n^\epsilon - h(p^\epsilon )| + ||h'||_{L^\infty ([0,1])}|p^\epsilon - p^0|$ . From the strong convergence of $p^\epsilon$ and $M^\epsilon$ in Lemma A.3 when $\epsilon \rightarrow 0$ , we can deduce the following strong convergence in $L^2(0,T;\;L^2(\Omega ))$ :

(A5) \begin{equation} n^\epsilon \rightarrow n^0 \;:\!=\; h(p^0) \end{equation}

Step 2: Now, let us focus on the behaviour on the boundary of the domain. Let the linear operator $\gamma$ be the trace operator on the boundary $(0,T) \times \partial \Omega$ . For any $\epsilon \in (0,\epsilon _0)$ small enough, we have $\gamma (p^\epsilon ) = p^\epsilon \mid _{(0,T) \times \partial \Omega }$ , then by the trace theorem, one has

\begin{equation*} || \gamma (p^{\epsilon })||_{L^2(0,T;\;L^2(\partial \Omega ))} \leq C || p^{\epsilon }||_{L^2(0,T;\;H^1(\Omega ))} \end{equation*}

where the constant $C$ only depends on $\Omega$ . Then

\begin{equation*} || \gamma (p^{\epsilon })||^2_{L^2(0,T;\;L^2(\partial \Omega ))} \leq C^2 \displaystyle \int _{0}^{T} \int _\Omega | p^\epsilon |^2 dx dt + C^2 \displaystyle \int _{0}^{T} \int _\Omega | \nabla p^\epsilon |^2 dx dt \lt \infty, \end{equation*}

due to Lemmas A.1 and A.2. Hence, up to extraction of a subsequence, $\gamma (p^\epsilon )$ converges weakly in $L^2((0,T) \times \partial \Omega )$ ; let $\gamma ^0 \;:\!=\; \displaystyle \lim _{\epsilon \rightarrow 0}\gamma (p^\epsilon )$ . For any function $\psi \in C^1(\overline{\Omega })$ and for $i = 1,\dots, d$ , Green’s formula gives

\begin{equation*} \int _\Omega \partial _i p^\epsilon \psi dx = -\int _\Omega p^\epsilon \partial _i \psi dx + \int _{\partial \Omega } \psi \gamma (p^\epsilon ) \nu _i dS.\end{equation*}

Since $p^\epsilon$ converges weakly to $p^0$ in $H^1(\Omega )$ , when $\epsilon \rightarrow 0$ , one has

\begin{equation*} \int _\Omega \partial _i p^0 \psi dx = -\int _\Omega p^0 \partial _i \psi dx + \int _{\partial \Omega } \psi \gamma ^0 \nu _i dS. \end{equation*}

We can deduce that $\gamma ^0 = \gamma (p^0)$ .

Step 3: We pass to the limit in the weak formulation of (A3), for any test function $\psi$ such that $\psi \in C^2([0,T] \times \overline{\Omega }), \psi (T,\cdot ) = 0$ in $\Omega,$ one has

\begin{align*} & - \underset{\text{strong convergence}}{\underbrace{\int _{0}^{T} \int _\Omega p^\epsilon \partial _t\psi dx dt}} \displaystyle + A \underset{\text{weak convergence}}{\underbrace{\int _{0}^{T} \int _\Omega \nabla p^\epsilon \cdot \nabla \psi dx dt}} =\underset{\text{weak convergence}}{\underbrace{\int _{\Omega } p^{\mathrm{init},\epsilon } \psi (0,\cdot ) dx}} \\[5pt] & - 2\epsilon A \underset{\text{bounded as } \epsilon \rightarrow 0}{\underbrace{\int _{0}^{T} \int _\Omega \frac{\psi }{1 - \epsilon n^\epsilon } \nabla p^\epsilon \cdot \nabla n^\epsilon dx dt}} + \underset{\text{strong convergence}}{\underbrace{\int _{0}^{T} \int _\Omega \psi p^\epsilon (1 - p^\epsilon ) (F_1 - F_2)(n^\epsilon,p^\epsilon ) dx dt}} \\[5pt] & - DA \underset{\text{weak convergence}}{\underbrace{\int _{0}^{T} \int _{\partial \Omega } \psi (p^\epsilon - p^{\mathrm{ext},\epsilon })\frac{1 - \epsilon n^{\mathrm{ext}, \epsilon }}{1 - \epsilon n^\epsilon }dS dt}}. \end{align*}

The weak convergence of the last term, on the boundary, is obtained from Lemma A.1 and Assumption A.5. When $\epsilon \lt \epsilon _0$ , the quantities $n^{\mathrm{ext},\epsilon }$ and $n^\epsilon$ are uniformly bounded with respect to $\epsilon$ , so $\frac{1 - \epsilon n^{\mathrm{ext}, \epsilon }}{1 - \epsilon n^\epsilon }$ converges strongly to $1$ when $\epsilon \rightarrow 0$ . From the previous step, one has $p^\epsilon |_{\partial \Omega } = \gamma (p^\epsilon ) \rightharpoonup \gamma (p^0)$ weakly in $L^2((0,T) \times \partial \Omega )$ . Passing to the limit, we obtain that $p^0 \in L^2(0,T;\;H^1(\Omega ))$ is a weak solution of the following problem:

\begin{equation*}\begin {cases} \partial _t p^0 - A\Delta p^0 = p^0(1-p^0)(F_1-F_2)(n^0,p^0) & \text { in } (0,T) \times \Omega, \\[5pt] p^0(0,\cdot ) = p^{\mathrm {init}} & \text { in } \Omega \\[5pt] \frac {\partial p^0}{\partial \nu } = -D(p^0 - p^{\mathrm {ext}}) & \text { on } (0,T)\times \partial \Omega. \end {cases}\end{equation*}

Using (A5), we can deduce that this problem is a self-contained initial-boundary value problem. Moreover, since $0$ and $1$ are, respectively, sub- and super-solutions of this problem, it admits a unique classical solution with values in $[0,1]$ . Hence, all the extracted subsequences converge to the same limit $p^0$ and $p^0|_{\partial \Omega } = \gamma (p^0)$ .

Figure 1. Sketch of the functions $f$ and $F$.

Figure 2. Sketch of the symmetric steady-state solutions $p$.

Table 1. The existence of steady-state solutions corresponding to values of parameters.

Figure 3. Phase portraits of (1.2): straight lines illustrate the boundary conditions, and solid curves show relations between $p'$ and $p$. Figure (a): curves $T_1, \ T_2$ and $T_3$ correspond to orbits of SD, SI and non-SM solutions, respectively. Figure (b): curve $T_4$ corresponds to an orbit of a non-SM solution.

Table 2. Parameters for the numerical illustration.

Figure 4. Convergence of $p^\epsilon$ to $p^0$ as $\epsilon$ goes to zero. The solid lines represent the solution $p^0(t, x)$ of (1.1) at $t = 50$ days. The dashed lines represent the proportion $p^\epsilon = \frac{n_i^\epsilon }{n_i^\epsilon +n_u^\epsilon }$ for the solutions $n_i^\epsilon, \ n_u^\epsilon$ of system (4.1) and (4.3) at $t = 50$.

Figure 5. The blue and red solid lines represent, respectively, the functions $\mathcal{F}_1$ and $\mathcal{F}_2$ of $p(L)$.

Figure 6. Case $p^{\mathrm{ext}} = 0.1$, $D = 0.05$: the solid lines illustrate the steady-state solutions. The dotted lines show the initial data of problem (1.1). The dashed lines represent the solution $p^0(t,x)$ for $t \in \{10, 20, 40, 60, 100\}$. The colour of each dashed line matches the colour of the equilibrium it converges to.

Figure 7. Case $p^{\mathrm{ext}} = 0.8$: the solid lines illustrate the steady-state solutions. The dotted lines show the initial data of problem (1.1). The dashed lines represent the solution $p^0(t,x)$ for $t \in \{10, 20, 40, 60, 100\}$. The colour of each dashed line matches the colour of the equilibrium it converges to.