1. Introduction
Mature neurons are highly polarised cells featuring functionally distinct compartments: the axons and the dendrites. Axons are ‘cables’ that transmit electrical signals to other neurons and can extend up to a length of 1 m in humans. Dendrites form complex tree-like structures and act as recipients for axons of other neurons. This polarity is established during the maturation process: initially, newborn neurons feature several highly dynamic, undifferentiated extensions of similar length called neurites [Reference Cooper8, Reference Hatanaka and Yamauchi18]. Eventually, one of these neurites is selected to become the axon; this process is often called neurite outgrowth. The understanding of this process is still incomplete, despite progress in characterising the role of molecular mechanisms as well as the influence of intra- and extracellular signalling molecules; see [Reference Takano, Xu, Funahashi, Namba and Kaibuchi30] for more details and further references. In this work, we focus on a single aspect of this process, namely the fact that the actual growth or shrinkage of neurites is due to the insertion or retraction of vesicles (i.e., circular structures composed of lipid membranes) at the outer tips of the neurites (growth cones). The vesicles themselves are produced in the cell body (soma) and then form complexes with motor proteins that allow for active transport along microtubules. The direction of transport is determined by the type of motor protein: kinesin results in anterograde transport (into the growth cones), while dynein motors move vesicles retrogradely towards the soma. Both kinesins and dyneins are present on vesicles during their transport along microtubules, but usually only one of them is active at any given time [Reference Encalada, Szpankowski, Xia and Goldstein13, Reference Twelvetrees, Pernigo and Sanger33]; see Figure 1 for a sketch.
The actual increase of the surface area of the plasma membrane is then due to the insertion of vesicles into the growth cone (exocytosis). Retraction, on the other hand, is accompanied by the removal of membrane material from the growth cone through endocytosis [Reference Pfenninger, Laurino and Peretti26, Reference Pfenninger27, Reference Tojima and Kamiguchi31]. Clearly, the (dis-)assembly of microtubules during growth and retraction is important, yet we neglect this effect in the present study in order not to complicate the model further and because we are primarily interested in the role of vesicle transport. The addition of microtubule dynamics is postponed to future work.
1.1. Relation to existing work
While there are different models for the underlying biochemical processes of selecting the neurite which eventually becomes the axon (see also [Reference Oliveri and Goriely25] for a recent review), mathematical models examining the role of vesicle transport in this process are relatively scarce. On the other hand, there are several models for molecular motor-based transport, also in axons [Reference Bressloff and Levien5, Reference Friedman and Craciun15, Reference Friedman and Craciun16, Reference Newby and Bressloff24, Reference Smith and Simmons29]. All these models feature linear transport terms which do not take into account size exclusion or finite volume effects. Our starting point is a model with non-linear transport terms proposed in [Reference Bressloff and Karamched4] which, again, focuses on transport in a grown axon. In particular, in [Reference Bressloff and Karamched4], a limited transport capacity inside the neurites is taken into account by size exclusion effects, and antero- and retrogradely moving particles are modelled separately. We will use this approach as a basis for the transport within the neurites in our model. In [Reference Humpert, Di Meo, Püschel and Pietschmann19], a similar approach is taken, yet on a microscopic particle level. Furthermore, [Reference Humpert, Di Meo, Püschel and Pietschmann19] extends the model by coupling two copies of it to pools representing the amount of vesicles present at the soma and growth cones, respectively. The aim of this paper is to introduce a macroscopic model in the spirit of both [Reference Bressloff and Karamched4, Reference Humpert, Di Meo, Püschel and Pietschmann19], yet additionally allowing the lengths of the respective neurites to change. In contrast to [Reference Bressloff and Karamched4] (see also [Reference Burger, Di Francesco, Pietschmann and Schlake7]), our model will have linear diffusion but non-linear transport terms.
Such a model can also be justified as the limit of a discrete lattice model, see [Reference Bruna, Burger, Pietschmann, Wolfram, Bellomo, Carrillo and Tadmor6, Reference Kourbane-Houssene, Erignoux, Bodineau and Tailleur20]. We are able to show that the solution stays within a given interval (usually taken to be $[0,1]$ ) so that the size exclusion property is preserved. Then, these equations which model transport inside the neurons are, as in [Reference Humpert, Di Meo, Püschel and Pietschmann19], coupled to ordinary differential equations for the evolution of the vesicle concentration at soma and tip, respectively. One of the main novelties is then to add a mechanism which allows for growth or shrinkage of the neurites depending on how many vesicles are present in the growth cones. Such free boundary models for neuron development have previously mostly been studied in the context of microtubule assembly, see [Reference Diehl, Henningsson, Heyden and Perna11, Reference Graham, Lauchlan and Mclean17, Reference McLean and Graham23]. These models focus on a single neurite in which the transport of microtubules is again modelled by a linear diffusion-advection equation on a domain of varying length. This is then coupled to an Ordinary Differential Equation (ODE) at one end of the domain accounting for the extension/retraction due to the microtubules. This coupling is sometimes performed via a Dirichlet condition. Closer to our approach is the coupling through a flux (Robin-type) boundary condition as in [Reference Bressloff and Levien5]. However, in that work, the authors assume only a linear relation for the boundary terms, in contrast to our study.
1.2. Contribution and outline
We make the following contributions:
• Based on [Reference Bressloff and Karamched4, Reference Humpert, Di Meo, Püschel and Pietschmann19], we introduce a macroscopic model for vesicle transport in developing neuron cells that includes multiple neurites, coupled with ODEs for the vesicle concentration in soma and growth cones. We use a non-linear transport mechanism to include finite size effects, extending the paradigm used in most of the previous models.
• We add a mechanism that allows for a change of neurite length depending on the respective vesicle concentration, which renders the model a free boundary problem.
• We rigorously prove existence and uniqueness of solutions, including box constraints corresponding to size exclusion effects due to the finite volume of vesicles.
• We provide a finite volume discretisation that preserves the box constraints.
• We perform a scaling of the model to biologically reasonable regimes and then present numerical experiments illustrating different behaviours of the model, in particular cycles of expansion and retraction as observed in experiments.
The paper is organised as follows. In Section 2, we present our model in detail. Section 3 contains some preliminaries and is then devoted to weak solutions, while Section 4 contains a brief discussion on (constant) stationary solutions. Section 5 provides a finite volume scheme and a non-dimensionalisation, together with the introduction of biologically relevant scales. Section 6 is devoted to the numerical studies. Finally, Section 7 provides a brief conclusion and outlook.
2. Mathematical model
In this section, we present a mathematical model for the growth process based on the principles stated in the introduction. For the reader’s convenience, we will focus on the case of two neurites connected to the soma, pointing out that the generalisation to multiple neurites is straightforward. For $j=1,2$ , the unknowns of our model read as follows:
• $L_j(t)$ denotes the length of the respective neurite at time $t$ ;
• $f_{+,j}(t,x)$ and $f_{-,j}(t,x)$ denote the density of motor protein complexes in the neurite $j$ that move towards the growth cone (anterograde direction) and towards the soma (retrograde direction), respectively;
• $\Lambda _{\text{som}}(t)$ is the amount of vesicles present in the soma at time $t$ ;
• $\Lambda _j(t)$ is the amount of vesicles present in the tip of each neurite at time $t$ .
The complete model consists of equations governing the dynamics inside each neurite, coupled with ODEs at the soma and growth cones, respectively, as well as with equations accounting for the change of the neurite lengths; see Figure 2 for an illustration of the couplings. We will discuss each component separately.
1. Dynamics within the neurites. Let $v_0\gt 0$ be the velocity of vesicles as they move along neurites and let $\rho _j = \rho _j(t,x) \;:\!=\; f_{+,j} + f_{-,j}$ be the total vesicle density, $j= 1, 2$ . We define the fluxes of antero- and retrogradely moving vesicle–motor complexes as:
respectively, where $D_T \gt 0$ is a positive diffusion constant. Let us emphasise again that compared to earlier models [Reference Bressloff and Levien5, Reference Friedman and Craciun15, Reference Friedman and Craciun16, Reference Newby and Bressloff24, Reference Smith and Simmons29], we include a non-linear transport term to account for finite size effects. We assume additionally that the complexes can (randomly, possibly via dissociation) change their direction with a given rate $\lambda \ge 0$ . We obtain the following drift-diffusion-reaction equations, a copy of which holds true in each neurite separately,
Here, $L_j(t)$ is the current length of the domain and $T\gt 0$ is a fixed final time. Note that the constants $v_0$ , $D_T$ and $\lambda$ do not depend on $j$ as they are related to the characteristics of the transport of vesicle–motor protein complexes which are the same in every neurite.
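For concreteness, the fluxes and balance laws can be written out as follows; this is a reconstruction, chosen term by term so as to be consistent with the transport, diffusion and reaction terms of the weak formulation (3.6) below:

```latex
\begin{aligned}
J_{+,j} &= v_0\, f_{+,j}\,(1-\rho_j) - D_T\,\partial_x f_{+,j},
\qquad
J_{-,j} = -\,v_0\, f_{-,j}\,(1-\rho_j) - D_T\,\partial_x f_{-,j},\\[4pt]
\partial_t f_{\pm,j} &+ \partial_x J_{\pm,j} = \lambda\,\bigl(f_{\mp,j} - f_{\pm,j}\bigr)
\qquad \text{in } (0,T)\times\bigl(0, L_j(t)\bigr),\quad j=1,2.
\end{aligned}
```

The factor $(1-\rho_j)$ in the drift is the size exclusion mechanism: the transport velocity degenerates as the total density approaches the maximal (normalised) density $1$.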
2. Coupling to soma and pools. We assume that all neurites are connected to the soma at the point $x=0$ . There, we have the following effects:
• Retrograde vesicles leave the neurite and enter the soma with rate $\beta _{-,j}(\Lambda _{\text{som}})\, f_{-,j}$ . Here, the function $\beta _{-,j}$ allows for a control of incoming vesicles in terms of the available quantity in the soma. The intuition is that the soma is less likely to allow for incoming vesicles when it already contains a larger number of them.
• Anterograde vesicles can leave the soma and enter the lattice with a given rate $\alpha _{+,j}(\Lambda _{\text{som}})\,g_{+,j}(f_{+,j}, f_{-,j})$ if there is enough space, that is, if $\rho _j\lt 1$ . This is ensured by assuming that the non-negative function $g_{+,j}$ satisfies $g_{+,j}(f_{+,j}, f_{-,j}) = 0$ whenever $\rho _j = f_{+,j} + f_{-,j}=1$ . The additional factor $\alpha _{+,j}(\Lambda _{\text{som}})$ reflects that the number of vesicles entering the neurite depends on the amount which is available within the soma. In particular, we ask for $\alpha _{+,j}(0)=0.$
At the point $x= L_j(t)$ , the neurite is connected to its respective pool and we have
• Anterograde vesicles leave the lattice and enter the pool with rate $\beta _{+,j}(\Lambda _j)\,f_{+,j}$ .
• Retrograde particles move from the pool into the neurite, once again only if space in the domain is available, with rate $\alpha _{-,j}(\Lambda _j)\,g_{-,j}(f_{+,j},f_{-,j})$ . Here, the functions $\beta _{+,j}$ and $\alpha _{-,j}$ serve the same purpose as $\beta _{-,j}$ and $\alpha _{+,j}$ previously, yet with pool instead of soma. Figure 2 provides a sketch of this situation. This behaviour is implemented by imposing the following flux boundary conditions (for each neurite):
(2.3) \begin{align} J_{+,j}(t,0) &= \alpha _{+,j}(\Lambda _{\text{som}}(t))\,g_{+,j}(\boldsymbol{f}_{j}(t,0)), \nonumber \\[5pt] -J_{-,j}(t,0) &= \beta _{-,j}(\Lambda _{\text{som}}(t))\,f_{-,j}(t,0), \nonumber \\[5pt] J_{+,j}(t, L_j(t))- L'_j(t)\, f_{+,j}(t, L_j(t)) &= \beta _{+,j}(\Lambda _j(t))\,f_{+,j}(t, L_j(t)), \nonumber \\[5pt] -J_{-,j}(t, L_j(t))+ L_j'(t)\, f_{-,j}(t, L_j(t)) &= \alpha _{-,j}(\Lambda _j(t))\,g_{-,j}({\boldsymbol{f}_{j}(t,L(t))}), \end{align}$j=1,2$ , for suitable functions $\alpha _{i,j},\,\beta _{i,j}$ , and $g_{i,j}$ , ${i= +,-}, j=1,2$ , whose properties will be specified later, and with the shortened notation $\boldsymbol{f}_j(\cdot \,, \cdot )\;:\!=\; (f_{+,j}(\cdot \,, \cdot ), f_{-,j}(\cdot \,, \cdot ))$ , $j= 1, 2$ . The additional terms on the left-hand side of the boundary conditions at $L_j(t)$ in (2.3) account for the mass flux of vesicles that occurs when the length of the neurite changes. They are especially important in order to keep track of the total mass in the system, see also [Reference Breden, Chainais-Hillairet and Zurek2, Reference Portegies and Peletier28] for similar constructions.
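To illustrate the interior dynamics, the following is a minimal numerical sketch (not the finite volume scheme of Section 5): an explicit upwind finite-volume step for the size-exclusion model on a fixed interval with zero-flux boundaries, i.e. ignoring the pool coupling and the moving boundary. All names and parameter values are illustrative.

```python
import numpy as np

def step(fp, fm, dx, dt, v0=1.0, D=0.1, lam=0.5):
    """One explicit upwind finite-volume step for the two-species
    size-exclusion transport model on a fixed interval with
    zero-flux (closed) boundaries."""
    rho = fp + fm
    n = len(fp)
    Fp = np.zeros(n + 1)  # interface fluxes, anterograde species
    Fm = np.zeros(n + 1)  # interface fluxes, retrograde species
    # anterograde drift to the right, blocked by the occupancy of the
    # target cell (size exclusion), plus central diffusion
    Fp[1:-1] = v0 * fp[:-1] * (1.0 - rho[1:]) - D * (fp[1:] - fp[:-1]) / dx
    # retrograde drift to the left (negative flux), plus central diffusion
    Fm[1:-1] = -v0 * fm[1:] * (1.0 - rho[:-1]) - D * (fm[1:] - fm[:-1]) / dx
    # conservative update plus direction-switching reaction with rate lam
    fp_new = fp + dt / dx * (Fp[:-1] - Fp[1:]) + dt * lam * (fm - fp)
    fm_new = fm + dt / dx * (Fm[:-1] - Fm[1:]) + dt * lam * (fp - fm)
    return fp_new, fm_new
```

With zero boundary fluxes the interface fluxes telescope, so the update conserves the total mass exactly, and under a CFL-type restriction on `dt` it preserves $0\le f_\pm$ and $\rho \le 1$ , mirroring the box constraints established analytically in Section 3.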
3. Dynamics of the free boundary. We assume that the length of each neurite $L_j$ satisfies the following ordinary differential equation:
where $h_j$ , $j= 1,2$ , are smooth functions to be specified. We think of $h_j$ as functions that change sign at a critical concentration of $\Lambda _j$ (i.e., switch between growth or shrinkage), which may depend on the current length of the neurite itself (e.g., in order to stop shrinkage at a minimal length).
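The ODE in question takes the form $L_j'(t)= h_j\bigl(\Lambda _j(t), L_j(t)\bigr)$ . As an illustration only (the constants $c_\pm$ and the critical concentration $\Lambda _c$ are hypothetical, not the choices used in the numerical experiments), one admissible $h_j$ is

```latex
h_j(\Lambda, L) \;=\; c_{+}\,(\Lambda - \Lambda_c)^{+}
\;-\; c_{-}\,(\Lambda_c - \Lambda)^{+}\,\bigl(1 - \mathrm{e}^{-(L - L_{\min,j})}\bigr),
\qquad c_{\pm} > 0.
```

This function is Lipschitz continuous, changes sign at the critical concentration $\Lambda _c$ , and its shrinkage part vanishes as $L \to L_{\text{min}, j}$ , so that $h_j(s, L_{\text{min}, j}) \ge 0$ for all $s \ge 0$ , as required by hypothesis (H $_3$ ) below.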
4. Dynamics in soma and growth cones. Finally, we describe the change of number of vesicles in the soma and the respective neurite growth cones, due to vesicles entering and leaving the pools. In addition, a production term is added at the soma, while for the growth cones we add terms that model the consumption or production of vesicles due to growth or shrinkage of the neurite, respectively. We obtain
where $\chi \gt 0$ is a given parameter that has the units vesicles/length and describes the amount of vesicles needed for one unit of neurite length, while $\gamma _{\textrm{prod}}$ accounts for the amount of vesicles that are produced in the soma.
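In terms of the boundary rates appearing in (2.3), pool dynamics consistent with this description and with the mass balance of Remark 2.1 read (a reconstruction):

```latex
\begin{aligned}
\Lambda_{\mathrm{som}}'(t) &= \gamma_{\mathrm{prod}}(t)
+ \sum_{j=1}^{2}\Bigl[\beta_{-,j}(\Lambda_{\mathrm{som}})\, f_{-,j}(t,0)
- \alpha_{+,j}(\Lambda_{\mathrm{som}})\, g_{+,j}\bigl(\boldsymbol{f}_j(t,0)\bigr)\Bigr],\\[4pt]
\Lambda_j'(t) &= \beta_{+,j}(\Lambda_j)\, f_{+,j}\bigl(t, L_j(t)\bigr)
- \alpha_{-,j}(\Lambda_j)\, g_{-,j}\bigl(\boldsymbol{f}_j(t, L_j(t))\bigr)
- \chi\, L_j'(t), \qquad j=1,2.
\end{aligned}
```

The term $-\chi\, L_j'(t)$ encodes that growth of the neurite consumes $\chi$ vesicles per unit length from the respective pool, while shrinkage returns them.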
Remark 2.1. Note that, except for the influence of the production term $\gamma _{\text{prod}}(t)$ , the total mass is preserved. It is given by the following quantity:
where we emphasise that the material of which the neurites are made also needs to be taken into account, which is done via the terms $\chi \,L_j(t)$ . Then, a formal calculation yields the following equation for the evolution of the total mass:
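Explicitly, reconstructed from the description above, the total mass and its evolution are

```latex
M(t) \;=\; \Lambda_{\mathrm{som}}(t)
+ \sum_{j=1}^{2}\Bigl( \Lambda_j(t) + \chi\, L_j(t)
+ \int_0^{L_j(t)} \rho_j(t,x)\,\mathrm{d}x \Bigr),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\, M(t) = \gamma_{\mathrm{prod}}(t).
```

Indeed, differentiating the integrals via the Reynolds transport theorem, the boundary fluxes from (2.3) cancel against the exchange terms in the pool ODEs, and the terms $\pm \chi\, L_j'(t)$ cancel each other, leaving only the production term.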
3. Existence of weak solutions
The aim of this section is to provide existence of a unique weak solution to the model (2.1)–(2.5). Let us first give some preliminaries.
3.1. Preliminaries
Let $L\gt 0$ and let $1 \le p\lt \infty$ . We denote by $L^p(0,L)$ and $W^{1,p}(0,L)$ the usual Lebesgue and Sobolev spaces. For $p= 2$ , we write $H^1({0,L})$ instead of $W^{1,2}({0,L})$ . Furthermore, $H^1({0,L})'$ is the dual space of $H^1({0,L})$ .
It is well known (see e.g., [Reference Adams1]) that there exists a unique linear, continuous map $\Gamma \colon W^{1,p}({0,L}) \to{\mathbb{R}}$ known as the trace map such that $\Gamma (u)= u(0)$ for all $u \in W^{1,p}({0,L}) \cap C([0,L])$ . In addition, let us recall the following trace estimate [Reference Brenner and Scott3, Theorem 1.6.6]:
Let $T\gt 0$ and let $(B, \|\cdot \|_B)$ be a Banach space. For every $1 \le r\lt \infty$ , we denote by $L^r(0,T; B)$ the Bochner space of all measurable functions $u\colon [0,T] \to B$ such that $ \|u\|_{L^r(0,T; B)}^r\;:\!=\; \int _0^T \|u(t)\|_B^r \; \,\textrm{d}t \lt \infty .$ For $r= \infty$ , the norm of the corresponding space $L^\infty (0,T; B)$ is given by $ \|u\|_{L^\infty (0,T; B)}\;:\!=\; \mathrm{ess \,sup}_{0 \le t \le T} \|u(t)\|_B.$ Finally, $C([0, T]; B)$ contains all continuous functions $u\colon [0, T] \to B$ such that
We refer to [Reference Evans14] as a reference for Bochner spaces. For every $a \in{\mathbb{R}}$ , we set $a^{\pm }\;:\!=\; \max \{\pm a, 0\}$ and for $u \in W^{1,p}({0,L})$ we define $u^\pm (\cdot )\;:\!=\; u(\cdot )^\pm$ and will use the fact that $u^\pm \in W^{1,p}(0,L)$ .
3.2. Transformation for a fixed reference domain
Before we give our definition of weak solutions, state the necessary assumptions and our main theorem, we transform the system (2.2)–(2.5) into an equivalent system set on a fixed reference domain. This simplifies the proofs as well as the function spaces we need to work with. To this end, we make the following change of variables:
Then we define the functions $ \bar{f}_i(t, y)= f_i(t, x)= f_i(t, L(t) y)$ and observe that
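By the chain rule, with $y = x/L(t)$ , the derivatives transform as

```latex
\partial_x f_i(t,x) \;=\; \frac{1}{L(t)}\,\partial_y \bar{f}_i(t,y),
\qquad
\partial_t f_i(t,x) \;=\; \partial_t \bar{f}_i(t,y)
- \frac{L'(t)}{L(t)}\, y\, \partial_y \bar{f}_i(t,y).
```

The second identity produces the additional drift term proportional to $L'(t)\,y$ that appears in the transformed equations below.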
Using (3.2), taking into account that, by the product rule, $y\, \partial _y \bar{f}_+= \partial _y(y\, \bar{f}_+)- \bar{f}_+$ and rearranging, the first equation of (2.2) reads as:
and with similar arguments:
where the fluxes are defined by:
Note that the fluxes $J_+$ and $\bar{J}_+$ are related to each other via
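One form of this relation consistent with the transformed transport terms in the weak formulation (3.6) is (a reconstruction):

```latex
\bar{J}_{+}(t, y) \;=\; \frac{1}{L(t)}\Bigl( J_{+}\bigl(t, L(t)\,y\bigr)
\;-\; L'(t)\, y\, \bar{f}_{+}(t, y) \Bigr),
```

i.e., the reference-domain flux rescales the physical flux by $1/L(t)$ and subtracts the apparent flux generated by the motion of the domain itself.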
and a similar relation can be deduced for $J_-$ and $\bar{J}_-$ . The boundary conditions (2.3) in the reference configuration then read
The ODE (2.4) remains unchanged, while for (2.5) quantities are evaluated at $y=1$ instead of $x=L_j(t)$ , which results in
for $t\in (0,T)$ .
3.3. Notion of weak solution and existence result
We now define the notion of weak solution to our problem. Unless otherwise specified, we assume $i \in \{+,-\}$ as well as $j \in \{1,2\}$ , $k \in \{1,2, \text{som}\}$ , while $C\gt 0$ denotes a constant that may change from line to line but always depends only on the data. From now on, we always write $f_{i,j}$ instead of $\bar{f}_{i,j}$ as we always work with the equations on the reference interval.
Definition 3.1. We say that $(\boldsymbol{f}_1, \boldsymbol{f}_2, \Lambda _{\textit{som}}, \Lambda _1, \Lambda _2, L_1, L_2)$ is a weak solution to (3.3)–(3.5), (2.4) if
(a) $0\le f_{i,j} \le 1$ as well as $\rho _j\;:\!=\; f_{+,j}+ f_{-,j} \le 1$ a.e. in $(0,T)\times (0,1)$ ;
(b) $ f_{i,j} \in L^2(0,T; H^1(0, 1))$ with $\partial _t f_{i,j} \in L^2(0,T; H^1(0, 1)')$ ;
(c) $\boldsymbol{f}_j$ solves (3.3)–(3.4) in the following weak sense
(3.6) \begin{equation} \begin{aligned} \int _0^1 \partial _t f_+\, \varphi _+\,\mathrm dy &= \int _0^1 \frac{1}{L^2(t)} \Big [L(t)\,v_0\,f_+\,(1- \rho ) - D_T\,\partial _y f_+ - L'(t)\,L(t)\,y\,f_+\Big ]\,\partial _y \varphi _+ \\[5pt] &\quad + \Big (\lambda (f_- - f_+) -\frac{L'(t)}{L(t)}\,f_+\Big ) \,\varphi _+ \,\mathrm dy \\[5pt] & \quad + \frac{1}{L(t)}\,\alpha _+(\Lambda _{\textrm{som}}(t))\,g_+(\boldsymbol{f}(t,0))\,\varphi _+(0)- \frac{1}{L(t)} \,\beta _+(\Lambda (t))\, f_+(t, 1) \,\varphi _+(1), \\[5pt] \int _0^1 \partial _t f_-\,\varphi _-\, \mathrm dy &= \int _0^1 \frac{1}{L^2(t)} \Big [-L(t)\,v_0\,f_-\,(1- \rho )- D_T\, \partial _y f_- - L'(t)\,L(t)\,y\, f_-\Big ]\,\partial _y \varphi _- \\[5pt] &\quad + \Big (\lambda (f_+ - f_-)-\frac{L'(t)}{L(t)} f_-\Big )\, \varphi _- \,\mathrm dy \\[5pt] & \quad -\frac 1{L(t)}\,\beta _-(\Lambda _{\textrm{som}}(t))\,f_-(t,0)\,\varphi _-(0) + \frac{1}{L(t)} \,\alpha _-(\Lambda (t))\,g_-(\boldsymbol{f}(t,1))\,\varphi _-(1), \end{aligned} \end{equation}for every $\varphi _+, \varphi _- \in H^1(0, 1)$ and almost all $t\in (0,T)$ .
(d) $L_j(0)= L_j^0$ , $\Lambda _k(0)= \Lambda _k^0$ , and $\boldsymbol{f}_j(0, y)= \boldsymbol{f}^0_j(y)$ for almost all $y\in (0,1)$ , for suitable $L_j^0, \Lambda _k^0$ and $f_{i,j}^0$ ;
(e) $\Lambda _k \in{C^1([0, T])}$ solves (3.5);
(f) $L_j \in{C^1([0, T])}$ solves (2.4).
We next state the assumptions on the data and non-linearities which read as follows:
(H0) $\Lambda _k^0\gt 0$ and $L_j^0 \ge L_{\text{min}, j}$ , where $L_{\text{min}, j}\gt 0$ is given.
(H1) For $f_{+,j}^0, f_{-,j}^0 \in L^2(0,1)$ it holds $ f_{+,j}^0, f_{-,j}^0 \ge 0$ and $0 \le \rho _j^0 \le 1$ a.e. in $(0,1)$ , where $\rho _j^0\;:\!=\; f_{+,j}^0+ f_{-,j}^0$ .
(H2) The non-linearities $g_{i,j}\colon{\mathbb{R}}^2 \to{\mathbb{R}}_+$ , are Lipschitz continuous and such that $g_{i,j}(s,t)= 0$ whenever $s+t = 1$ as well as $g_{-,j}(s,0) = g_{+,j}(0,s) = 0$ for all $0 \le s \le 1$ .
(H3) The functions $h_j\colon{\mathbb{R}}_+ \times [ L_{\text{min}, j},+\infty ) \to{\mathbb{R}}$ are such that
(i) there exist non-negative functions $K_{h_j} \in L^\infty ((0,\infty ))$ such that
\begin{equation*} |h_j(t,a) - h_j(t,b)| \le K_{h_j}(t)|a-b|, \end{equation*}for all $a,\,b \in [ L_{\text{min}, j},+\infty )$ ;
(ii) $h_j(s, L_{\text{min}, j})\ge 0$ for every $s\ge 0$ .
(H4) The functions $\alpha _{i,j}\colon{\mathbb{R}}_+ \to{\mathbb{R}}_+$ are increasing and Lipschitz continuous. Moreover, $\alpha _{-,j}(t) \ge 0$ for all $t\gt 0$ and $\alpha _{-,j}(0)= 0$ .
(H5) The functions $\beta _{i,j}\colon{\mathbb{R}}_+ \to{\mathbb{R}}_+$ are nonnegative and Lipschitz continuous. Moreover, there exists $\Lambda _{j, \text{max}}\gt 0$ such that $\beta _{+,j}(\Lambda _{j, \text{max}})= 0$ .
(H6) The parameters satisfy $ v_0, D_T, \chi \gt 0$ and $ \lambda \ge 0$ .
(H7) The function $\gamma _{\text{prod}} \colon{\mathbb{R}}_+ \to{\mathbb{R}}_+$ is such that $\lim _{t\to \infty } \gamma _{\text{prod}}(t) =0$ .
Remark 3.2 (Interpretation of the assumptions). Let us briefly discuss the meaning of the assumptions in terms of our application. Assumption (H $_0$ ) states that we start with a predefined number of neurites whose lengths lie above a fixed minimal length and that all pools as well as the soma are non-empty. (H $_1$ ) is necessary as $f_{+,j}^0, f_{-,j}^0$ model densities and must therefore be non-negative, and as we assume that there is a maximal density (due to the limited space in the neurites) normalised to $1$ . In (H $_2$ ), the regularity is needed for the analysis and is only a mild restriction. The remaining requirements are necessary to ensure that all densities remain between $0$ and $1$ . (H $_3$ ) ensures that there is a lower bound for the length of the neurites, meaning that neurites cannot vanish, as is observed in practice. (H $_4$ ) ensures that vesicles can only enter neurites if there are some available in the growth cone or soma, respectively, while (H $_5$ ) allows the pools to decrease the rate of entering vesicles when they become too crowded. Finally, (H $_6$ ) states that diffusion, transport and reaction effects are all present at all times (yet with possibly different strengths), and (H $_7$ ) postulates the production of vesicles within the soma. We point out that assumption (H $_7$ ) is only needed to guarantee existence of stationary solutions.
Then we can state our existence result.
3.4. Proof of Theorem 3.3
The proof of Theorem 3.3 is based on a fixed point argument applied to an operator obtained by concatenating linearised versions of (3.6), (2.4) and (3.5). Let us briefly sketch our strategy before we provide the corresponding rigorous results. We work in the Banach space
endowed with the norm
Given $(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2 ) \in X$ , let $\boldsymbol{\Lambda }\;:\!=\;(\Lambda _{\text{som}}, \Lambda _1, \Lambda _2) \in C^1([0,T])^3$ be the unique solution to the ODE system
We denote the mapping $(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2) \mapsto \boldsymbol{\Lambda }$ by $\mathcal{B}_1$ . This $\boldsymbol{\Lambda }$ is now substituted into (2.4), that is, we are looking for the unique solution $\boldsymbol{L}=(L_1, L_2) \in C^1([0,T])^2$ to the ODE problems
and the corresponding solution operator is denoted by $\mathcal{B}_2$ . Finally, these solutions $\boldsymbol{\Lambda }$ and $\boldsymbol{L}$ are substituted into (3.6), and we look for the unique solution $\boldsymbol{f}_j \in L^2(0, T; H^1(0, 1))^2$ , with $\partial _t \boldsymbol{f}_j \in L^2(0, T; H^1(0, 1)^{\prime })^2$ , to the (still non-linear) PDE problem
for every $\varphi _+, \varphi _- \in H^1(0, 1)$ . We denote the resulting solution operator $(\boldsymbol{\Lambda }, \boldsymbol{L}) \mapsto (\boldsymbol{f}_1, \boldsymbol{f}_2 )$ by $\mathcal{B}_3$ . Then, given an appropriate subset $\mathcal{K} \subset X$ , we define the (fixed point) operator $\mathcal{B} \colon \mathcal{K} \to X$ as
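Written as a composition of the three operators introduced above, this reads

```latex
\mathcal{B}(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2)
\;=\; \mathcal{B}_3\Bigl( \mathcal{B}_1(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2),\;
\mathcal{B}_2\bigl( \mathcal{B}_1(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2) \bigr) \Bigr),
```

i.e., the input densities first determine the pool values via $\mathcal{B}_1$ , these in turn determine the lengths via $\mathcal{B}_2$ , and both are fed into the PDE solution operator $\mathcal{B}_3$ .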
We show that $\mathcal{B}$ is self-mapping and contractive, so that existence is a consequence of Banach’s fixed point theorem.
Let us begin with system (3.7).
Lemma 3.4. Let $(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2) \in X$ . Then there exists a unique $\boldsymbol{\Lambda }= (\Lambda _{\textit{som}}, \Lambda _1, \Lambda _2) \in C^1([0, T])^3$ that solves (3.7) with initial conditions
Proof. This result is an application of the Cauchy-Lipschitz theorem, since the right-hand sides of (3.7) are Lipschitz continuous with respect to $\Lambda _k$ thanks to hypotheses (H $_4$ ) and (H $_5$ ).
Lemma 3.5. Let $\mathcal{B}_1\colon X \to C([0, T])^3$ be the operator that maps $(\widehat{\boldsymbol{f}}_1, \widehat{\boldsymbol{f}}_2) \in X$ to the solution $\boldsymbol{\Lambda }$ to (3.7). Then, $\mathcal{B}_1$ is Lipschitz continuous.
Proof. By Lemma 3.4, $\mathcal{B}_1$ is well defined. Let now $(\widehat{\boldsymbol{f}}_1^{(1)},\widehat{\boldsymbol{f}}_2^{(1)}),$ $(\widehat{\boldsymbol{f}}_1^{(2)}, \widehat{\boldsymbol{f}}_2^{(2)} ) \in X$ and let $\boldsymbol{\Lambda }^{(1)}\;:\!=\; \mathcal{B}_1(\widehat{\boldsymbol{f}}_1^{(1)}, \widehat{\boldsymbol{f}}_2^{(1)})$ and $\boldsymbol{\Lambda }^{(2)}\;:\!=\; \mathcal{B}_1(\widehat{\boldsymbol{f}}_1^{(2)}, \widehat{\boldsymbol{f}}_2^{(2)})$ be the solutions to (3.7) satisfying the same initial condition (3.11). We fix $t \in [0, T]$ and consider
$a=1,2$ . Taking the difference of the two equations, setting $\delta \Lambda _j\;:\!=\; \Lambda _j^{(1)}- \Lambda _j^{(2)}$ , exploiting hypotheses (H $_2$ )–(H $_5$ ) and collecting the constants gives
while the trace inequality (3.1) and a Gronwall argument imply
A similar argument holds for the equation for $\Lambda _{\text{som}}$ , and we eventually have
We next show the following existence result for equation (3.8).
Lemma 3.6. Let $\boldsymbol{\Lambda } \in C^1([0, T])^3$ be the unique solution to (3.7). Then, there exists a unique $\boldsymbol{L}= (L_1, L_2) \in C^1([0, T])^2$ that solves (3.8) with initial condition $ L_j(0)= L_j^{(0)}$ . Furthermore, for all $t \in (0, T)$ it holds
Proof. The existence and uniqueness follow as before. The lower bound in (3.13) can be deduced by applying Theorem A.1 ([Reference Deimling10, Theorem 5.1]) in the appendix with $X={\mathbb{R}}$ , $\Omega = [L_{\text{min}, j},\infty )$ and $f=h_j$ . Assumption (A1) in the theorem is satisfied as, due to (H $_3$ ), the choice $\omega = K_{h_j}$ fulfils the requirements. For (A2), we note that the unit outward normal of the set $[L_{\text{min}, j},\infty )$ at $L_{\text{min},j}$ is $-1$ and that $h_j(s, L_{\text{min}, j})\ge 0$ for every $s\ge 0$ (again by (H $_3$ )). This yields $(h_j(s, L_{\text{min}, j}),-1) = -h_j(s, L_{\text{min}, j}) \le 0$ as needed. In order to prove the upper bound in (3.13), we fix $t \in (0,T)$ , integrate (3.8) and use (H $_3$ )-(i) to obtain
In addition, we observe that (3.8) gives $|L_j'(t)| \le \|h_j\|_{L^{\infty }({\mathbb{R}}^2)}$ . Then, the fact that $L_j(t) \ge L_{\text{min}, j}$ allows us to conclude (3.14).
Lemma 3.7. The operator $\mathcal{B}_2 \colon C([0, T])^3 \to C([0, T])^2$ that maps $\boldsymbol{\Lambda }$ to the solution $\boldsymbol{L}$ to (3.8) is Lipschitz continuous in the sense of
where $L_{h_j}$ is the Lipschitz constant of $h_j$ . If $T \,\max _{j= 1,2}\bigl \{L_{h_j}\, e^{2\,T\,L_{h_j}} \bigr \}\lt 1$ , then $\mathcal{B}_2$ is contractive.
Proof. The proof works as for Lemma 3.5, so we omit it.
We next investigate the existence of solutions to system (3.9)–(3.10).
Theorem 3.8. Let $\boldsymbol{\Lambda }$ and $ \boldsymbol{L}$ be the unique solution to (3.7) and (3.8), respectively. Then, there exists a unique solution $(\boldsymbol{f}_1, \boldsymbol{f}_2) \in X$ to (3.9)–(3.10) such that $f_{i,j}(t,y)\in [0,1]$ for a.e. $y \in (0, 1)$ and $t \in [0, T]$ .
Proof. To simplify the notation, in this proof we will drop the $j$ -index and, for the reader’s convenience, we split the proof into several steps.
Step 1: Approximation by truncation. Given a generic function $a$ we introduce the truncation
We apply this to the non-linear transport terms $f_{\pm }\,(1-\rho )$ in (3.6) which yields (after summing up)
We solve (3.17) by means of the Banach fixed point theorem. We follow [Reference Egger, Pietschmann and Schlottbom12], pointing out that a similar approach has also been used in [Reference Marino, Pietschmann and Pichler22], yet in a different context.
Let us set $Y\;:\!=\; (L^{\infty }((0,T); L^2(0,1)))^2$ and introduce the following nonempty, closed set
with $T, C_{\mathcal{M}}\gt 0$ to be specified. Then we define the mapping $\Phi \colon \mathcal{M} \to Y$ such that $\Phi (\widetilde{\boldsymbol{f}})={\boldsymbol{f}}$ where, for fixed $\widetilde{\boldsymbol{f} }\in \mathcal{M}$ , $\boldsymbol{f}$ solves the following linearised equation (cf. [Reference Lieberman21, Chapter III])
Step 2: $\Phi$ is self-mapping. We show that
We choose $\varphi _i= f_i$ in (3.18) and estimate the terms appearing in the resulting equation separately. From (3.13), on the left-hand side we have
On the right-hand side we first use equation (3.14) along with Young’s inequality for some $\varepsilon _1\gt 0$ and the fact that $y \in (0,1)$ to achieve
while (3.14) once again gives
On the other hand, (3.13), Young’s inequality for some $\varepsilon _2\gt 0$ and (3.16) give
We further observe that
Finally we estimate the boundary terms. We use hypotheses (H $_2$ ), (H $_4$ ), (H $_5$ ) and equation (3.13) together with Young’s inequality with some $\varepsilon _3, \dots, \varepsilon _6\gt 0$ and the trace inequality (3.1) to achieve
We choose $\varepsilon _{\kappa }$ , $\kappa =1, \dots, 6$ , in such a way that all the terms of the form $\|\partial _y f_i\|_{L^2(0,1)}$ can be absorbed on the left-hand side of (3.18), which simplifies to
We then use a Gronwall argument to infer
This implies that (3.19) is satisfied and therefore $\Phi$ is self-mapping.
Step 3: $\Phi$ is a contraction. Let $\widetilde{\boldsymbol{f}}_1, \widetilde{\boldsymbol{f}}_2 \in \mathcal{M}$ and let ${\boldsymbol{f}_1}\;:\!=\; \Phi (\widetilde{\boldsymbol{f}}_1)$ and ${\boldsymbol{f}_2}\;:\!=\; \Phi (\widetilde{\boldsymbol{f}}_2)$ be two solutions to (3.18) with the same initial datum $\boldsymbol{f}^0$ . We then consider the difference of the corresponding equations and choose $\varphi _i= f_{i, 1}- f_{i, 2}$ .
Reasoning as in Step 2 and exploiting the Lipschitz continuity of the functions
we get
Again by means of a Gronwall argument we have
and then $\Phi$ is a contraction if $T\gt 0$ is small enough so that $ C\, T\, e^{C\, T}\lt 1$ . Then Banach’s fixed point theorem applies and we obtain a solution $\boldsymbol{f} \in (L^{\infty }(0,T; L^2(0,1)))^2$ to (3.17). Standard regularity theory then gives $\boldsymbol{f} \in (L^2(0,T; H^1(0,1)))^2$ , with $\partial _t \boldsymbol{f} \in (L^2(0,T; H^1(0,1)'))^2$ .
Step 4: Box constraints. We show that the solution $\boldsymbol{f}$ obtained in Step 3 is actually a solution to (3.6), since it satisfies the box constraints $ f_+, f_- \ge 0$ and $\rho \le 1$ .
We start by showing that $f_+ \ge 0$ , and to this end we consider only the terms involving the $\varphi _+$ -functions in (3.17), that is,
For every $\varepsilon \gt 0$ we consider the function $\eta _\varepsilon \in W^{2, \infty }({\mathbb{R}})$ given by
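A standard choice with the properties used below (assumed here; it is a regularised positive part whose derivative activates on $[0, 2\varepsilon ]$ , matching the sets $\Omega _\varepsilon$ introduced later) is

```latex
\eta_\varepsilon(s) \;=\;
\begin{cases}
0, & s \le 0,\\[2pt]
s^2/(4\varepsilon), & 0 < s \le 2\varepsilon,\\[2pt]
s - \varepsilon, & s > 2\varepsilon,
\end{cases}
\qquad\text{so that}\qquad
\eta_\varepsilon'(s) = \min\Bigl\{\tfrac{s^{+}}{2\varepsilon},\, 1\Bigr\},
\quad 0 \le \eta_\varepsilon'' \le \tfrac{1}{2\varepsilon}.
```

In particular, $\eta _\varepsilon \in W^{2,\infty }({\mathbb{R}})$ , $\eta _\varepsilon '$ vanishes on $(\!-\!\infty, 0]$ and $\eta _\varepsilon (s) \to s^+$ as $\varepsilon \to 0$ .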
Next we choose $\varphi _+= -\eta _\varepsilon '(\!-\!f_+)$ and observe that by the chain rule we have $\partial _y \varphi _+= \eta _\varepsilon ''(\!-\!f_+)\,\partial _y f_+$ . Using such $\varphi _+$ in (3.21) gives
Thanks to Young’s inequality with a suitable $\kappa \gt 0$ and to (3.16) we have
as well as
where $\kappa$ depends on the lower and upper bounds on $L(t)$ , see Lemma3.6. Choosing $\kappa$ sufficiently small and taking into account that the term involving $\alpha _+$ is non-negative we obtain
To gain some sign information on the right-hand side of (3.23), we introduce the set
We first use (3.13), (3.22), and the Lipschitz continuity of (3.20) to obtain
as well as
and
On the other hand, from (3.24) it follows that $-2\varepsilon \lt f_+- f_- \le 2\varepsilon$ , which implies
Concerning the boundary term, we note that by definition the function $\eta _\varepsilon '$ is nonzero only if its argument is nonnegative, which in any case gives
Summarising, from (3.23) we have
which implies
where the last equality holds thanks to assumption (H$_1$). It follows that $f_+ \ge 0$ . The proof that $f_- \ge 0$ is similar, starting again from equation (3.17) and taking into account only the terms involving the $\varphi _-$ -functions.
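The explicit form of $\eta _\varepsilon$ is not displayed here; a standard choice consistent with all properties used in this step (namely $\eta _\varepsilon \in W^{2,\infty }({\mathbb{R}})$ , $\eta _\varepsilon '$ vanishing on negative arguments and $\eta _\varepsilon ''$ supported on $[0, 2\varepsilon ]$ ) is the regularised positive part

```latex
\eta_\varepsilon(s) =
\begin{cases}
0, & s \le 0,\\[4pt]
\dfrac{s^2}{4\varepsilon}, & 0 < s \le 2\varepsilon,\\[4pt]
s - \varepsilon, & s > 2\varepsilon,
\end{cases}
\qquad
\eta_\varepsilon'(s) \in [0,1], \qquad
\eta_\varepsilon''(s) = \tfrac{1}{2\varepsilon}\,\mathbb{1}_{(0,2\varepsilon)}(s),
```

and any other regularisation with these properties works equally well.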
We now show that $\rho \le 1$ ; to this end, we start from (3.17) with $\varphi _i\;:\!=\; \varphi$ , for $i=+,-$ . This gives
We choose $\varphi = \eta _\varepsilon '(\rho -1)$ , $\widetilde{\Omega }_\varepsilon = \{y \in (0,1): \, 1 \le \rho (t, y) \le 1+ 2\varepsilon \}$ , and reason as before, exploiting hypothesis (H$_2$). Then the limit as $\varepsilon \to 0$ entails
thanks again to (H$_1$). It follows that $\rho \le 1$ , and the claim is proved.
$\underline{\textrm{Step 5: Conclusion.}}$ Using the original notation, we have shown the existence of a solution $\bar{\boldsymbol{f}}$ to (3.6) such that $\bar{f}_+, \bar{f}_- \ge 0$ and $\bar{\rho } \le 1$ for a.e. $y \in (0,1)$ , $t \in [0, T]$ . This implies that $\boldsymbol{f}$ is a weak solution to (3.9)–(3.10) such that $f_+, f_- \ge 0$ and $\rho \le 1$ for a.e. $x \in (0, L(t))$ , $t \in [0, T]$ . This completes the proof.
Lemma 3.9. The operator $\mathcal{B}_3 \colon C([0, T])^3 \times C([0, T])^2 \to X$ , which maps a pair $(\boldsymbol{\Lambda }, \boldsymbol{L})$ to the unique solution $(\boldsymbol{f}_1, \boldsymbol{f}_2)$ of (3.9)–(3.10), is Lipschitz continuous.
Proof. Thanks to Theorem 3.8, $\mathcal{B}_3$ is well defined. We choose $\boldsymbol{\Lambda }^{(a)} \in C([0,T])^3$ and $\boldsymbol{L}^{(a)} \in C([0, T])^2$ and set $(\boldsymbol{f}_1^{(a)}, \boldsymbol{f}_2^{(a)} )\;:\!=\; \mathcal{B}_3(\boldsymbol{\Lambda }^{(a)}, \boldsymbol{L}^{(a)})$ , $a=1,2$ . Then we start from equations (3.9), (3.10) for the respective copies, take the corresponding differences and use a Gronwall inequality to obtain
where the constant $\overline C$ is shown to be uniform.
Proof of Theorem 3.3. We use the Banach fixed point theorem to show the existence of a unique solution $(\boldsymbol{f}_1, \boldsymbol{f}_2, \boldsymbol{\Lambda }, \boldsymbol{L})$ to (2.1)–(2.5). We consider the set
Let $\mathcal{B} \colon \mathcal{K} \to X$ be given by $\mathcal{B}(\boldsymbol{f}_1, \boldsymbol{f}_2) = \mathcal{B}_3(\mathcal{B}_1(\boldsymbol{f}_1, \boldsymbol{f}_2), \mathcal{B}_2(\mathcal{B}_1(\boldsymbol{f}_1, \boldsymbol{f}_2)))$ , where $\mathcal{B}_1, \mathcal{B}_2, \mathcal{B}_3$ are defined in Lemmas 3.5, 3.7, and 3.9, respectively. Hence $\mathcal{B}$ is well defined, and Theorem 3.8 implies that $\mathcal{B}$ is self-mapping, that is, $\mathcal{B}\colon \mathcal{K} \to \mathcal{K}$ .
We next show that $\mathcal{B}$ is a contraction. Let $(\widehat{\boldsymbol{f}}_1^{(1)}, \widehat{\boldsymbol{f}}_2^{(1)}), (\widehat{\boldsymbol{f}}_1^{(2)}, \widehat{\boldsymbol{f}}_2^{(2)}) \in \mathcal{K}$ and set $( \boldsymbol{f}_1^{(1)}, \boldsymbol{f}_2^{(1)})\;:\!=\; \mathcal{B}(\widehat{\boldsymbol{f}}_1^{(1)}, \widehat{\boldsymbol{f}}_2^{(1)})$ as well as $(\boldsymbol{f}_1^{(2)}, \boldsymbol{f}_2^{(2)})\;:\!=\; \mathcal{B}(\widehat{\boldsymbol{f}}_1^{(2)}, \widehat{\boldsymbol{f}}_2^{(2)})$ . Using (3.25), (3.15) and (3.12) and collecting the inessential constants gives
from which the contractivity follows if $T\gt 0$ is small enough that $C T e^{CT}\lt 1$ . By Banach's fixed point theorem, we then infer the existence of a unique $( \boldsymbol{f}_1, \boldsymbol{f}_2) \in \mathcal{K}$ such that $(\boldsymbol{f}_1, \boldsymbol{f}_2)= \mathcal{B}( \boldsymbol{f}_1, \boldsymbol{f}_2)$ . From (3.12), this implies the uniqueness of $\boldsymbol{\Lambda }$ , too. Due to the uniform $L^\infty$ -bounds on $\boldsymbol{f}_j$ , a concatenation argument yields existence on the complete interval $[0,T]$ . The proof is thus complete.
4. Stationary solutions
This section is dedicated to a brief investigation of stationary solutions $(\boldsymbol{f}_1^{\infty }, \boldsymbol{f}_2^{\infty }, \boldsymbol{\Lambda }^{\infty }, \boldsymbol{L}^{\infty })$ to (2.1)–(2.5). That is, they satisfy
with boundary conditions
while $\boldsymbol{\Lambda }^\infty$ and $\boldsymbol{L}^\infty$ solve
respectively. Notice that we assumed $\gamma _{\text{prod}}(t) \to 0$ as $t\to \infty$ , cf. hypothesis (H$_7$), in order to guarantee a finite total mass of the stationary state (which, in turn, results in finite values of $L_j^\infty$ ). From the modelling point of view, this means that at the end of the growth phase, when the neuron is fully developed, there is no more production of vesicles in the soma. In addition to equations (4.1)–(4.3), stationary solutions are parametrised by their total mass:
For fixed $0\lt m_\infty \lt +\infty$ , we expect three possible types of stationary solutions (where the upper bound on $m_\infty$ excludes neurites of infinite length):
• No mass inside the neurites, that is, $\rho _j^\infty = 0$ , and $m_\infty = \Lambda _1^\infty + \Lambda _2^\infty + \chi \,L_1^\infty + \chi \,L_2^\infty + \Lambda _{\text{som}}^\infty$ . This solution is always possible as it automatically satisfies (4.2). The length depends on the fraction of mass stored in each $\Lambda _j$ , which yields an infinite family of solutions.
• Constant solutions with mass inside the neurites, that is, $\boldsymbol{f}_j^\infty \neq (0,0)$ . In this case, for $\lambda \gt 0$ , the reaction term enforces $f_{-,j}^\infty = f_{+,j}^\infty =: f_j^\infty$ . However, such solutions only exist if the non-linearities at the boundary are such that (4.2) holds. In this case, compatibility with a given total mass $m_\infty$ can be obtained by adjusting the concentration $\Lambda _{\text{som}}^\infty$ , which decouples from the remaining equations.
• Non-constant solutions, featuring boundary layers at the end of the neurites.
A natural question is the existence of non-constant stationary solutions, as well as their uniqueness and stability properties, which we postpone to future work. Instead, we focus on conditions for the existence of non-trivial constant solutions. Let us note, however, that even if the biological system reaches a stationary state in terms of the length of the neurites, we would still expect a flux of vesicles through the system (i.e., the system will still be out of thermodynamic equilibrium). Thus, the fluxes at the boundary would be non-zero and one would expect non-constant stationary solutions even in this case.
4.1. Constant stationary solutions
We assume a strictly positive reaction rate $\lambda \gt 0$ , which requires $f_{+,j} = f_{-,j} =: f_j^\infty \in [0,1]$ . Assuming in addition $f_j^\infty = \textrm{const}.$ , (4.1) is automatically satisfied (note that $(L_j^\infty )'=0$ ). Thus, the actual constants are determined only via the total mass, the stationary solutions to the ODEs and the boundary couplings, where the fluxes take the form:
Making the choice
the boundary conditions (4.2) become
which together with (4.4) yield
Thus, fixing the values of $f_j^\infty$ , we obtain $\Lambda _j^\infty$ , $\Lambda _{\text{som}}^\infty$ by inverting $\alpha _{+,j}$ , $\alpha _{-,j}$ , $\beta _{+,j}$ , and $\beta _{-,j}$ , respectively, together with the compatibility conditions:
We further observe that the equations in (4.3) are the differences of in- and outflow at the respective boundaries, which in the case of constant $f_j^\infty$ read as:
It turns out that (4.7) will be automatically satisfied as soon as (4.6) and (4.3) hold. In particular, the last equation in (4.3) will determine the values of $L_j^\infty$ . We make the choices
for some $c_{\beta _{i,j}}, c_{\alpha _{i,j}} \ge 0$ , with $\Lambda _{\text{som}, \text{max}}$ and $\Lambda _{j,\text{max}}$ being the maximal capacity of soma and growth cones. Then, for given $f_j^\infty \in (0,\frac{1}{2}]$ to be a stationary solution, we need
The interesting question of stability of these states will be treated in future work.
5. Finite volume scheme and scaling
5.1. Finite volume scheme
We now present a computational scheme for the numerical solution of model (2.1)–(2.5). The scheme relies on a spatial finite volume discretisation of the conservation law (2.2) and adapted implicit–explicit time-stepping schemes. The starting point for the construction of the discretisation is the transformed equation (3.3). First, we introduce an equidistant grid:
of the interval $(0,1)$ and define control volumes $I_k \;:\!=\; (y_{k-1/2},y_{k+1/2})$ , $k=0,\ldots, n_e$ . The mesh parameter is $h=y_{k+1/2}-y_{k-1/2} = (n_e+1)^{-1}$ . The cell averages of the approximate (transformed) solution are denoted by:
$i\in \{+,-\}$ , $j\in \{1,2\}$ . To shorten the notation, we omit the vesicle index $j\in \{1,2\}$ in the following. Integrating (3.3) over an arbitrary control volume $I_k$ , $k=0, \ldots, n_e$ , yields
We denote the convective flux by $\textbf{v}_+(t,y) \;:\!=\; v_0\,(1-\rho )-L'(t)\,y$ and use a Lax-Friedrichs approximation at the end points of the control volume:
with $\{\{\cdot\}\}$ and $[\![\!\cdot\!]\!]$ denoting the usual average and jump operators. For the diffusive fluxes, we use an approximation by central differences:
For the inner intervals $I_k$ with $k \in \{1,\ldots, n_e-1\}$ , this gives the equations:
while for $k=0$ and $k=n_e$ , we insert the boundary conditions (3.4) to obtain
almost everywhere in $(0,T)$ . In the same way, we deduce a semi-discrete system for $\bar{f}_-^k$ , $k=0,\ldots, n_e$ , taking into account the corresponding boundary conditions from (3.4).
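To make the flux approximations concrete, the following Python sketch assembles the Lax-Friedrichs convective flux and the central-difference diffusive flux at the interior faces. All names, and the use of a stabilisation parameter `eta`, are our illustrative assumptions rather than the authors' implementation (the paper's computations were done in Matlab).

```python
import numpy as np

def numerical_fluxes(f, v_face, D, h, eta):
    """Semi-discrete finite volume fluxes on the transformed interval (0,1).

    f      : cell averages f_k, k = 0, ..., n_e
    v_face : convective velocity v_+ evaluated at the interior faces y_{k+1/2}
    D      : diffusion coefficient (after the transform, D_T / L(t)^2)
    h      : mesh width
    eta    : Lax-Friedrichs stabilisation parameter
    Returns the convective and diffusive fluxes at the interior faces.
    """
    avg = 0.5 * (f[1:] + f[:-1])          # the average operator {{f}}
    jump = f[1:] - f[:-1]                 # the jump operator [[f]]
    conv = v_face * avg - 0.5 * eta * jump  # Lax-Friedrichs approximation
    diff = -D * jump / h                    # central-difference diffusive flux
    return conv, diff
```

For a constant state the jump vanishes, so the convective flux reduces to plain upwinded transport and the diffusive flux to zero, as expected of a consistent scheme.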
To treat the time dependency, we use an implicit–explicit time-stepping scheme. We introduce a time grid $t_n = \tau \,n$ , for $n=0,\ldots, n_t$ , and for some time-dependent function $g\colon [0,T]\to X$ we use the notation $g(t_n) =: g^{(n)}$ . To deduce a fully discrete scheme, we replace the time derivatives in (5.1) by the difference quotient $\partial _t\bar{f}_{+}(t_{n+1}) \approx \tau ^{-1}(\bar{f}_+^{(n+1)}-\bar{f}_+^{(n)})$ and evaluate the diffusion-related terms at the next time point $t_{n+1}$ and all convection- and reaction-related terms at the current time point $t_n$ . This yields a system of linear equations of the form:
with vector of unknowns $\vec f_\pm ^{(n)} = (f_\pm ^{0,(n)}, \ldots, f_\pm ^{n_e,(n)})^\top$ , mass matrix $M$ , diffusion matrix $A$ , a vector $\vec b_\pm ^{(n)}$ summarising the convection-related terms and another vector $\vec c_\pm ^{(n)}$ for the reaction-related terms. In the same way, we deduce equations for the discretised ordinary differential equations (2.5) and (2.4), which correspond to a standard backward Euler discretisation:
Equations (5.2)–(5.5) even decouple. One after the other, we can compute
for $k\in \{1,2,\text{som}\}$ and $j\in \{1,2\}$ .
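A single time step of the scheme described above can be sketched as follows; the matrix and vector names mirror the structure of the linear system, but are otherwise our own illustrative assumptions.

```python
import numpy as np

def imex_step(M, A, f_old, b, c, tau):
    """One implicit-explicit step: diffusion (matrix A) is treated
    implicitly, while the convection vector b and reaction vector c are
    evaluated at the old time level. Solves
        (M + tau*A) f_new = M f_old + tau*(b + c).
    """
    return np.linalg.solve(M + tau * A, M @ f_old + tau * (b + c))
```

The decoupling of the equations noted above then corresponds to performing, one after the other, this update for $\bar{f}_\pm$ followed by backward Euler updates for the pool concentrations and the lengths.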
5.2. Non-dimensionalisation of the model
To transform the model to a dimensionless form, we introduce a typical time scale $\tilde{t}$ , a typical length $\tilde{L}$ , etc., and dimensionless quantities $\bar{t},\, \bar{L}$ such that $t = \tilde{t} \bar{t}$ , $L = \tilde{L} \bar{L}$ . This is performed on the original system from Section 2, not on the one transformed to the unit interval, as we want to work with appropriate physical units for all quantities, including the length of the neurites. Realistic typical values are taken from [Reference Humpert, Di Meo, Püschel and Pietschmann19] (see also [Reference Pfenninger27, Reference Tsaneva-Atanasova, Burgo, Galli and Holcman32, Reference Urbina, Gomez and Gupton34]), which yield the following choices: the typical length is $\tilde{L} = 25 \,\mu \text{m}$ , the typical time is $\tilde{t} = 7200 \,\text{s}$ , the diffusion constant is $D_T = 0.5\, \frac{\mu \text{m}^2}{\text{s}}$ , and the velocity is $\tilde{v}_0 = 50 \frac{\mu \text{m}}{\text{min}} = \frac 56\,\frac{\mu \text{m}}{\text{s}}$ . For the reaction rate, we assume $\tilde{\lambda } = 1\,\text{s}^{-1}$ . The typical influx and outflow velocity is $\tilde{\alpha } = \tilde{\beta } = 0.4\, \frac{\mu \text{m}}{\text{s}}$ . Finally, we choose a typical production of $\tilde{\gamma } = 10\, \text{vesicles}/\text{s}$ .
The remaining quantities to be determined are the maximal density of vesicles inside the neurites, the factor which translates a given number of vesicles into a length change of the neurite, and the maximal capacity of soma and growth cones.
Maximal density
We assume the neurite to be tube-shaped, pick a circular cross-section at an arbitrary point and calculate the maximal number of circles with the diameter of the vesicles that fit into the circle whose diameter is that of the neurite. In this situation, hexagonal packing of the smaller circles is optimal, which allows covering about $90\,\%$ of the area (neglecting boundary effects). As the typical diameter of one vesicle is $130\,\textrm{nm}$ and the neurite diameter is 1 $\mu$ m, we obtain the condition:
which implies $n_{\text{max}} \le 65$ . Now for a tube segment of length $1\,\mathrm{\mu m}$ , one can stack $7$ fully packed cross-sectional slices, each of thickness equal to the vesicle diameter, that is, $130\,\textrm{nm}$ . This results in a maximal density of $455 \frac{\text{vesicles}}{\mu \text{m}}$ . As the neurite also contains microtubules and as an optimal packing is biologically unrealistic, we take about one-third of this value as maximal density, which yields $ \rho _{\text{max}} = 155 \frac{\text{vesicles}}{\mathrm{\mu m}}.$ The typical density of anterograde and retrograde particles is fixed to $\tilde{f} \;:\!=\; \tilde{f}_+= \tilde{f}_- = 39\, \frac{\text{vesicles}}{\mu \text{m}}$ so that their sum corresponds to a half-filled neurite. Thus, for the scaled variables $\bar{f}_+, \,\bar{f}_-$ , their sum being $\bar{\rho }= \bar{f}_+ + \bar{f}_- = 2$ corresponds to a completely filled neurite. This implies that the term $1- \rho$ has to be replaced by $1-\frac{\bar{\rho }}{2}$ .
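The packed-density figure can be reproduced with elementary arithmetic; the sketch below only checks the slice count and the resulting $455$ vesicles per micrometre (the final choice $\rho _{\text{max}} = 155$ then corresponds to taking roughly one-third of this value).

```python
vesicle_diam_nm = 130.0
segment_length_nm = 1000.0   # a tube segment of length 1 micrometre

# number of fully packed cross-sectional slices of thickness 130 nm
slices = int(segment_length_nm // vesicle_diam_nm)   # 7 slices

n_per_slice = 65             # vesicles per cross-section, from the packing bound
rho_packed = slices * n_per_slice   # 455 vesicles per micrometre
```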
Vesicles and growth
We again consider the neurite as a cylinder with a diameter of $1\,\mu \textrm{m}$ . Thus, the surface area of a segment of length $1\,\mu \textrm{m}$ is $ A_{\text{surf}} = 2\pi \frac{1\,\mu \textrm{m}}{2}1\,\mu \textrm{m} \approx 3.14\, (\mu \textrm{m})^2.$ We consider vesicles of $130$ nm diameter, which thus possess a surface area of $0.053$ $(\mu \textrm{m})^2$ . Thus, the number of vesicles needed for an extension of $1\,\mu \textrm{m}$ is
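The surface-area figures can be checked directly: with the rounded values quoted above, roughly $59$ vesicles are needed per micrometre of extension. This is illustrative arithmetic under the stated cylinder and sphere assumptions, not the authors' code.

```python
import math

d_neurite = 1.0    # diameter of the neurite in micrometres
d_vesicle = 0.13   # diameter of a vesicle in micrometres

A_surf = math.pi * d_neurite * 1.0     # lateral area of a 1 µm segment, ~3.14 µm²
A_vesicle = math.pi * d_vesicle ** 2   # surface area of a spherical vesicle, ~0.053 µm²
n_per_um = A_surf / A_vesicle          # vesicles needed per micrometre of growth
```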
Maximal capacities and minimal values
Finally, we fix the maximal number of vesicles in the pools and soma to $\Lambda _{\text{som,max}} = 6000\,\text{vesicles}$ and $\Lambda _{j,\text{max}}= 400\,\text{vesicles}$ and choose the typical values $\tilde{\Lambda }_{\text{som}}$ and $\tilde{\Lambda }_{\text{cone}}$ as half of the respective maximum. It remains to fix the minimal length of each neurite as well as the number of vesicles in the growth cone which defines the switching point between growth and shrinkage. We choose a minimal length of $5\,\mu \textrm{m}$ , while the sign of $h_j$ changes when the number of vesicles in the growth cone reaches a value of $100\,\text{vesicles}$ . This yields the dimensionless quantities $ \bar{\Lambda }_{\text{growth}} = 1, \; \bar{L}_{\text{min}} = 0.1.$ Applying the scaling, model (2.1)–(2.5) transforms to
$\text{in } (0, T) \times (0, \bar{L}_j(t))$ , with dimensionless parameters $ \kappa _v = \frac{v_0 \tilde{t} }{\tilde{L}}, \; \kappa _D = D_T\frac{\tilde{t} }{\tilde{L}^2}, \; \kappa _\lambda = \tilde{t} \tilde{\lambda },$ and with boundary conditions (keeping the choices (4.5), (4.8), (4.9)):
with $\kappa _{\alpha _{+,j}} = \frac{\tilde{t}}{\tilde{L}} c_{\alpha _{+,j}}, \kappa _{\alpha _{-,j}} = \frac{\tilde{t}}{\tilde{L}} c_{\alpha _{-,j}}, \kappa _{\beta _{+,j}} = \frac{\tilde{t}}{\tilde{L}} c_{\beta _{+,j}}, \kappa _{\beta _{-,j}} = \frac{\tilde{t}}{\tilde{L}} c_{\beta _{-,j}}$ . It remains to fix the values of the constants appearing in the functions $\alpha _{\pm, j},\,\beta _{\pm, j}$ . As they correspond to velocities, we fix them to the typical in-/outflux velocity:
For the soma and the growth cones, we choose half of the maximal amount of vesicles as typical values, that is, $\tilde{\Lambda }_{\text{som}} = 3000\,\text{vesicles}$ , $\tilde{\Lambda }_{j} = 200\,\text{vesicles}$ , $j=1,2$ . We obtain
with $ \kappa _{\text{som}} = \frac{\tilde{t}}{\tilde{\Lambda }_{\text{som}}}\tilde{c} \tilde{f}, \kappa _\gamma = \tilde{\gamma } \frac{\tilde{t}}{\tilde{\Lambda }_{\text{som}}}, \kappa _{\text{cone}} = \frac{\tilde{t}}{\tilde{\Lambda }_{\text{cone}}}\tilde{c} \tilde{f}, \kappa _{h} = \frac{\tilde{t}}{\tilde{\Lambda }_{\text{cone}}} \tilde{h} \chi .$ Finally, for the scaled production function $\bar{h}$ in (2.4), we make the choice $\bar{h}_j(\bar{\Lambda }, \bar{L}) = \operatorname{atan}(\bar{\Lambda } - \bar{\Lambda }_{\text{growth}})H(\bar{L} - \bar{L}_{\text{min}})$ , $j=1,2$ , where $H$ is a smoothed Heaviside function. We have
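With the typical values fixed above, the dimensionless transport parameters can be evaluated directly (a quick check; the variable names are ours):

```python
t_typ = 7200.0       # typical time in s
L_typ = 25.0         # typical length in µm
v0 = 5.0 / 6.0       # transport velocity in µm/s
D_T = 0.5            # diffusion constant in µm²/s
lam = 1.0            # reaction rate in 1/s

kappa_v = v0 * t_typ / L_typ        # = 240
kappa_D = D_T * t_typ / L_typ**2    # = 5.76
kappa_lam = t_typ * lam             # = 7200
```

The large value of $\kappa _v$ relative to $\kappa _D$ reflects that active transport dominates diffusion on the chosen scales.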
6. Numerical studies
We present two examples that demonstrate the capability of our model to reproduce observations in biological systems. Both start with an initial length difference of the two neurites. The first example shows that the shorter neurite can become the longer one due to a local advantage in the number of vesicles present in the growth cone, while the second showcases oscillatory behaviour in neurite lengths that is observed experimentally. Both simulations are performed in Matlab, using the finite volume scheme introduced in Section 5.1 with the parameters $\eta = 10$ , $n_e = 100$ and $\tau = 1\mathrm{e}{-4}$ . We chose $T = 9$ (corresponding to $18$ hours) as a maximal simulation time, yet when a stationary state is reached earlier (measured by the criterion $\|f_{\pm, j}^{(n+1)} - f_{\pm, j}^{(n)}\|_2 \le 1\mathrm{e}{-9}$ ), the simulation is terminated. We also set $\lambda = 0$ in all simulations.
6.1. Fast growth by local effects
The first example shows that an initial length deficiency of a neurite can be overcome by a local advantage of vesicles in the growth cone. In this set-up, we fixed the following initial data (all scaled quantities): $L_1^0 = 1.1$ , $L_2^0 = 0.9$ , $\Lambda _{\text{soma}}^0 = 1$ , $\Lambda _1^0= 0.65$ , $\Lambda _2^0= 1.15$ , $\boldsymbol{f}_1^0 = \boldsymbol{f}_2^0 = (0.2,0.2)$ . Furthermore, the functions
were chosen in this example. The choices of $\alpha _+$ and $\alpha _-$ are motivated as follows: both are proportional to $\Lambda / 2$ as the typical values within the soma and growth cones, respectively, are chosen to be half the maximum which means that $\Lambda / 2 = 1$ if this value is reached. Thus, relative to their capacity, the outflow rates from soma and growth cone behave similarly. Now, since we are interested in the effect of vesicle concentration in the respective growth cones, we chose a small constant in $\alpha _+$ in order to limit the influence of new vesicles entering from the soma, relative to $\alpha _-$ . This is a purely heuristic choice to examine if such a local effect can be observed in our model at all. Figure 3 shows snapshots of the simulation at different times, while Figure 4a shows the evolution of neurite lengths and vesicle concentrations over time. The results demonstrate that the local advantage of a higher vesicle concentration in the growth cone of the shorter neurite is sufficient to outgrow the competing neurite. Yet, this requires a weak coupling in the sense that the outflow rate at the soma is small, see the constant $0.05$ in $\alpha _+$ . Increasing this value, the local effect does not prevail and indeed, the longer neurite will always stay longer while both neurites grow at a similar rate as shown in Figure 4b. Thus, we consider this result as biologically not very realistic, in particular as it cannot reproduce cycles of extension and retraction that are observed in experiments.
6.2. Oscillatory behaviour due to coupling of soma outflow rates to density of retrograde vesicles
In order to overcome the purely local nature of the effect in the previous example, it seems reasonable to include effects that couple the behaviour at the growth cones to that of the soma via the concentrations of vesicles in the neurites. We propose the following two mechanisms: first, we assume that a strongly growing neurite is less likely to emit a large number of retrograde vesicles, as it uses all available vesicles for the growth process. In addition, we assume that the soma aims to reinforce strong growth and does so by measuring the density of arriving retrograde vesicles: the lower it becomes, the more anterograde vesicles are released. Such behaviour can easily be incorporated into our model by choosing
The remaining functions are defined as in Section 6.1. The initial data in this example are $L_1^0 = 1.1$ , $L_2^0 = 1$ , $\Lambda _{\textrm{som}}^0 = 1$ , $\Lambda _{1}^0=\Lambda _2^0 = 0.9$ and $f_{\pm, 1}^0 = f_{\pm, 2}^0 = 0$ .
The results are presented in Figures 5 (snapshots) and 6 and are rather interesting: first, it is again demonstrated that the shorter neurite may outgrow the larger one. Furthermore, as a consequence of the non-local coupling mechanism, the model is able to reproduce the oscillatory cycles of retraction and growth that are sometimes observed, see for example, [Reference Cooper9, Reference Winans, Collins and Meyer35]. Also the typical oscillation period of 2–4 hours observed in [Reference Winans, Collins and Meyer35, Figure 1] can be confirmed in our computation. Finally, the model predicts one neurite being substantially longer than the other which one might interpret as axon and dendrite.
7. Conclusion & outlook
We have introduced a free boundary model for the dynamics of developing neurons based on a coupled system of partial and ordinary differential equations. We provided an existence and uniqueness result for weak solutions and also presented a finite volume scheme in order to simulate the model. Our results show that the model is able to reproduce behaviour such as retraction–elongation cycles on scales comparable to those observed in experimental measurements as shown in Section 6.2. On the other hand, the numerical results show that the density of vesicles within the neurites is, for most of the time, rather small. Thus, the effect of the non-linear transport term that we added may be questioned and indeed, rerunning the simulations with linear transport yields rather similar results. It remains to be analysed if such low vesicle densities are biologically reasonable and thus offer the opportunity to simplify the model.
A further natural question that arises at this point is what can be learned from these results. We think that while the transport mechanisms within the neurites as well as the growth and shrinkage are reasonable and fixed (up to the discussion about linear vs. non-linear transport above), most of the behaviour of the model is encoded in the coupling via the boundary conditions. These, on the other hand, allow for a large variety of choices, among which it will be difficult to decide which one is actually implemented in a real cell. Thus, as a next step for future work, we propose to consider these couplings as unknown parameters that need to be learned from experimental data. We are confident that this will allow us to identify possible interactions between soma and growth cones and will give new insight into the actual mechanisms at work.
Finally, we remark that as our model only focuses on the role of vesicle transport, many other effects are neglected, and clearly our approach is nowhere near a complete description of the process of neurite outgrowth. We therefore plan to extend our model in the future, adding effects that have been neglected so far, such as the role of microtubule assembly as well as chemical signals.
Acknowledgements
GM acknowledges the support of DFG via grant MA 10100/1-1, project number 496629752, JFP via grant HE 6077/16-1 (eBer-22-57443). JFP thanks the DFG for support via the Research Unit FOR 5387 POPULAR, Project No. 461909888. We thank Andreas Püschel (WWU Münster) for valuable discussions on the biological background. In addition, we are very grateful for the comments of the anonymous referees.
Competing interests
The author(s) declare none.
Appendix A
For convenience of the reader, we state [Reference Deimling10, Theorem 5.3] about invariant regions of solutions of ODEs.
Theorem A.1. Let $X$ be a real normed linear space, $\Omega \subset X$ an open set and $D \subset X$ a distance set with $D \cap \Omega \neq \emptyset$ . Let $f\;:\;(0, a) \times \Omega \rightarrow X$ be such that
(A1)
\begin{equation*} \begin{aligned} & (f(t, x)-f(t, y), x-y)_{+} \leq \omega (t,|x-y|)|x-y| \\[5pt] & \quad \text{ for } x \in \Omega \backslash D, y \in \Omega \cap \partial D, t \in (0, a), \end{aligned} \end{equation*}where $\omega \;:\;(0, a) \times \mathbb{R}^{+} \rightarrow \mathbb{R}$ is such that $\rho (t) \leq 0$ in $(0, \tau ) \subset (0, a)$ whenever $\rho \;:\;[0, \tau ) \rightarrow \mathbb{R}^{+}$ is continuous, $\rho (0)=0$ and $D^{+} \rho (t) \leq \omega (t, \rho (t))$ for every $t \in (0, \tau )$ with $\rho (t)\gt 0$ (where $D^+$ denotes the one-sided derivative with respect to $t$ ).
(A2) If $x \in \Omega \cap \partial D$ is such that the set of outward normal vectors $N(x)$ is non-empty, then
\begin{equation*}(f(t, x), \nu )_{+} \leq 0 \end{equation*}for all $\nu \in N(x)$ and $t \in (0, a)$ .
Then $D \cap \Omega$ is forward invariant with respect to $f$ , that is, any continuous $x\;:\;[0, b) \rightarrow \Omega$ , such that $x(0) \in D$ and $x^{\prime }=f(t, x)$ in $(0, b)$ , satisfies $x(t) \in D$ in $[0, b)$ .