
Fluctuations of the local times of the self-repelling random walk with directed edges

Published online by Cambridge University Press:  15 September 2023

Laure Marêché*
Affiliation:
Institut de Recherche Mathématique Avancée, UMR 7501 Université de Strasbourg et CNRS
*Postal address: 7 rue René Descartes, 67000 Strasbourg, France. Email address: laure.mareche@math.unistra.fr

Abstract

In 2008, Tóth and Vető defined the self-repelling random walk with directed edges as a non-Markovian random walk on $\mathbb{Z}$: in this model, the probability that the walk moves from a point of $\mathbb{Z}$ to a given neighbor depends on the number of previous crossings of the directed edge from the initial point to the target, called the local time of the edge. Tóth and Vető found that this model exhibited very peculiar behavior, as the process formed by the local times of all the edges, evaluated at a stopping time of a certain type and suitably renormalized, converges to a deterministic process, instead of a random one as in similar models. In this work, we study the fluctuations of the local times process around its deterministic limit, about which nothing was previously known. We prove that these fluctuations converge in the Skorokhod $M_1$ topology, as well as in the uniform topology away from the discontinuities of the limit, but not in the most classical Skorokhod topology. We also prove the convergence of the fluctuations of the aforementioned stopping times.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and results

1.1. Self-interacting random walks

The study of self-interacting random walks began in 1983 in an article of Amit et al. [Reference Amit, Parisi and Peliti1]. Before [Reference Amit, Parisi and Peliti1], the expression ‘self-avoiding random walk’ referred to paths on graphs that do not intersect themselves. However, these are not easy to construct step by step; hence one would consider the set of all possible paths of a given length. Since one does not follow a single path as it grows with time, this is not really a random walk model. In order to work with an actual random walk model with self-avoiding behavior, the authors of [Reference Amit, Parisi and Peliti1] introduced the ‘true’ self-avoiding random walk. This is a random walk on $\mathbb{Z}^d$ for which, at each step, the position of the process at the next step is chosen randomly from among the neighbors of the current position, depending on the number of previous visits to said neighbors, with lower probabilities for those that have been visited the most. This process is a random walk in the sense that it is constructed step by step, but unlike most random walks in the literature, it is non-Markovian: at each step, the law of the next step depends on the whole past of the process.

It turns out that the ‘true’ self-avoiding random walk is hard to study. This led to the introduction by Tóth [Reference Tóth13, Reference Tóth14, Reference Tóth15] of non-Markovian random walks with bond repulsion, for which the probability of going from one site to another, instead of depending on the number of previous visits to the target, depends on the number of previous crossings of the undirected edge between the two sites, which is called the local time of the edge, with lower probabilities for the edges that have been crossed the most in the past. These walks are much easier to study, at least on $\mathbb{Z}$ , because one can apply the Ray–Knight approach to them. This approach was introduced by Ray and Knight in [Reference Ray11, Reference Knight2], and was used for the first time for non-Markovian random walks by Tóth in [Reference Tóth13, Reference Tóth14, Reference Tóth15]. Since then, it has been applied to many other non-Markovian random walks, such as a continuous-time version of the ‘true’ self-avoiding random walk in [Reference Tóth and Vető18], edge-reinforced random walks (see the corresponding part of the review [Reference Pemantle9] and references therein), and excited random walks (see [Reference Kosygina, Mountford and Peterson4] and references therein). The Ray–Knight approach works as follows: though the random walk itself is not Markovian, if we stop it when the local time at a given edge has reached a certain threshold, then the local times on the edges will form a Markov chain, which enables their analysis. Thanks to this approach, Tóth was able to prove scaling limits for the local times process for many different random walks with bond repulsion in his works [Reference Tóth13, Reference Tóth14, Reference Tóth15]. The law of the limit depends on the random walk model, but it is always a random process (the model studied by Tóth in [Reference Tóth16] has a deterministic limit, but it is not a random walk with bond repulsion, as it is self-attracting: the more an edge has been crossed in the past, the more likely it is to be crossed in the future).

1.2. The self-repelling random walk with directed edges

In 2008, Tóth and Vető [Reference Tóth and Vető17] introduced a process seemingly very similar to the aforementioned random walks with bond repulsion, in which the probability of going from one site to another depends on the number of crossings of the directed edge between them, instead of the crossings of the undirected edge. This process, called the self-repelling random walk with directed edges, is a nearest-neighbor random walk on $\mathbb{Z}$ defined as follows. For any set A, we denote by $|A|$ the cardinality of A. Let $w \,:\, \mathbb{Z} \mapsto (0,+\infty)$ be a non-decreasing and non-constant function. We will denote the walk by $(X_n)_{n\in\mathbb{N}}$ . We set $X_0=0$ , and for any $n\in\mathbb{N}$ , $i\in\mathbb{Z}$ , we denote by $\ell^\pm(n,i)=|\{0 \leq m \leq n-1 \,|\, (X_m,X_{m+1})=(i,i\pm1)\}|$ the number of crossings of the directed edge $(i,i\pm1)$ before time n, that is, the local time of the directed edge at time n. Then

\begin{equation*}\mathbb{P}(X_{n+1}=X_n\pm1)=\frac{w(\pm(\ell^{-}(n,X_n)-\ell^+(n,X_n)))}{w(\ell^+(n,X_n)-\ell^{-}(n,X_n))+w(\ell^{-}(n,X_n)-\ell^+(n,X_n))}.\end{equation*}
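To fix ideas, the dynamics can be simulated directly from this definition. The following minimal sketch is not taken from the paper; it uses the illustrative weight function $w(k)=e^k$, which is one admissible non-decreasing, non-constant choice, and the function names are ours.

```python
import math
import random
from collections import defaultdict

# Minimal simulation sketch of the self-repelling random walk with directed
# edges; w(k) = exp(k) is an illustrative admissible weight function.
def simulate_walk(n_steps, w=lambda k: math.exp(k), seed=0):
    rng = random.Random(seed)
    ell_plus = defaultdict(int)   # ell^+(n, i): crossings of the edge (i, i+1)
    ell_minus = defaultdict(int)  # ell^-(n, i): crossings of the edge (i, i-1)
    x, path = 0, [0]
    for _ in range(n_steps):
        d = ell_minus[x] - ell_plus[x]
        p_right = w(d) / (w(d) + w(-d))   # probability of stepping to the right
        if rng.random() < p_right:
            ell_plus[x] += 1
            x += 1
        else:
            ell_minus[x] += 1
            x -= 1
        path.append(x)
    return path, ell_plus, ell_minus

path, ell_plus, ell_minus = simulate_walk(10000)
print(path[-1], max(path), min(path))
```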

Using the local time of directed edges instead of that of undirected edges may seem like a very small change in the definition of the process, but the behavior of the self-repelling random walk with directed edges is actually very different from that of classical random walks with bond repulsion. Indeed, Tóth and Vető [Reference Tóth and Vető17] were able to prove that the local times process has a deterministic scaling limit, which is in sharp contrast with the random limit processes obtained for the random walks with bond repulsion on undirected edges [Reference Tóth13–Reference Tóth15] and even for the simple random walk [Reference Knight2].

The result of [Reference Tóth and Vető17] is as follows. For any $a\in\mathbb{R}$ , we let $a_+=\max(a,0)$ . If for any $n\in\mathbb{N}$ , $i\in\mathbb{Z}$ , we denote by $T_{n,i}^\pm$ the stopping time defined by $T_{n,i}^\pm=\min\{m\in\mathbb{N}\,|\,\ell^\pm(m,i) = n\}$ , then $T_{n,i}^\pm$ is almost surely finite by Proposition 1 of [Reference Tóth and Vető17], and we have the following.

Theorem 1. ([Reference Tóth and Vető17, Theorem 1].) For any $\theta>0$ , $x\in \mathbb{R}$ , we have that

\begin{equation*} \sup_{y \in \mathbb{R}}\left|\frac{1}{N}\ell^+\left(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm,\lfloor N y\rfloor\right)-\left(\frac{|x|-|y|}{2}+\theta\right)_+\right| \end{equation*}

converges in probability to 0 when N tends to $+\infty$ .

Thus the local times process of the self-repelling random walk with directed edges admits the deterministic scaling limit $y \mapsto \Big(\frac{|x|-|y|}{2}+\theta\Big)_+$ , which has the shape of a triangle. This also implies the following result on convergence to a deterministic limit for the $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ .

Proposition 1. ([Reference Tóth and Vető17, Corollary 1].) For any $\theta>0$ , $x\in \mathbb{R}$ , we have that $\frac{1}{N^2}T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ converges in probability to $(|x|+2\theta)^2$ when N tends to $+\infty$ .
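These two convergences are easy to observe numerically. The sketch below (again with the illustrative weight $w(k)=e^k$ and hypothetical helper names) runs the walk until $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^+$, compares the renormalized local times with the triangular limit of Theorem 1, and compares $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^+/N^2$ with $(|x|+2\theta)^2$ as in Proposition 1.

```python
import math
import random
from collections import defaultdict

def local_times_at_T_plus(N, theta, x, w=lambda k: math.exp(k), seed=0):
    """Run the walk until T^+_{floor(N*theta), floor(N*x)} and return the
    local times together with the number of steps taken (i.e. the value of T)."""
    rng = random.Random(seed)
    ell_plus, ell_minus = defaultdict(int), defaultdict(int)
    pos, steps = 0, 0
    target_site, target_level = math.floor(N * x), math.floor(N * theta)
    while ell_plus[target_site] < target_level:
        d = ell_minus[pos] - ell_plus[pos]
        if rng.random() < w(d) / (w(d) + w(-d)):
            ell_plus[pos] += 1
            pos += 1
        else:
            ell_minus[pos] += 1
            pos -= 1
        steps += 1
    return ell_plus, ell_minus, steps

N, theta, x = 300, 1.0, 0.5
ell_plus, ell_minus, T = local_times_at_T_plus(N, theta, x)
# T equals the total number of directed-edge crossings.
assert T == sum(ell_plus.values()) + sum(ell_minus.values())
for y in (-3.0, -1.5, 0.0, 1.0, 2.0, 3.0):
    limit = max((abs(x) - abs(y)) / 2 + theta, 0.0)
    print(y, round(ell_plus[math.floor(N * y)] / N, 3), round(limit, 3))
print(T / N ** 2, (abs(x) + 2 * theta) ** 2)   # Proposition 1
```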

The deterministic character of these limits makes the behavior of the self-repelling random walk with directed edges very unusual, hence worthy of study. In particular, it is natural to consider the possible fluctuations of the local times process and of the $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ around their deterministic limits. However, before the present paper, nothing was known about these fluctuations. In this work, we prove convergence in distribution of the fluctuations of the local times process and of the $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ . It happens that the limit of the fluctuations of the local times process is discontinuous; therefore, before stating the results, we have to be careful about the topology in which it may converge.

1.3. Topologies for the convergence of the local times process

For any interval $I \subset \mathbb{R}$ , let DI be the space of càdlàg functions on I, that is, the set of functions $I \mapsto \mathbb{R}$ that are right-continuous and have left limits everywhere in I. For any function $Z \,:\, I \mapsto \mathbb{R}$ , we denote by $\|Z\|_\infty=\sup_{y\in I}|Z(y)|$ the uniform norm of Z on I. The uniform norm on I gives a topology on DI, but it is often too strong to deal with discontinuous functions.

For discontinuous càdlàg functions, the most widely used topology is the Skorokhod $J_1$ topology, introduced by Skorokhod in [Reference Skorokhod12] (see [Reference Pollard10, Chapter VI] for a course), which is often called ‘the’ Skorokhod topology. Intuitively, two functions are close in this topology if they are close for the uniform norm after allowing some small perturbation of time. Rigorously, for $a<b$ in $\mathbb{R}$ , the Skorokhod $J_1$ topology on D[a, b] is defined as follows. We denote by $\Lambda_{a,b}$ the set of functions $\lambda \,:\, [a,b] \mapsto [a,b]$ that are bijective, strictly increasing, and continuous (they correspond to the possible perturbations of time), and we denote by $\mathrm{Id}_{a,b} \,:\, [a,b] \mapsto [a,b]$ the identity map, defined by $\mathrm{Id}_{a,b}(y)=y$ for all $y\in[a,b]$ . The Skorokhod $J_1$ topology on D[a, b] is defined through the following metric: for any $Z_1,Z_2 \in D[a,b]$ , we set

\begin{equation*}d_{J_1,a,b}(Z_1,Z_2)=\inf_{\lambda\in\Lambda_{a,b}}\max\big(\|Z_1\circ\lambda-Z_2\|_\infty,\|\lambda-\mathrm{Id}_{a,b}\|_\infty\big).\end{equation*}

It can be proven rather easily that this is indeed a metric. We can then define the Skorokhod $J_1$ topology on $D({-}\infty,\infty)$ with the following metric: if for any sets $A_1 \subset A_2$ and $A_3$ and any function $Z\,:\,A_2 \mapsto A_3$ , we denote by $Z|_{A_1}$ the restriction of Z to $A_1$ , then for $Z_1,Z_2 \in D({-}\infty,\infty)$ , we set

\begin{equation*}d_{J_1}(Z_1,Z_2)=\int_0^{+\infty}e^{-a}\big(d_{J_1,-a,a}\big(Z_1|_{[{-}a,a]},Z_2|_{[{-}a,a]}\big) \wedge 1 \big)\mathrm{d}a.\end{equation*}
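To make the role of the time perturbations concrete, here is a simple worked example, added for this presentation and not taken from the original article. Let $a<c<c+\varepsilon<b$ and consider $Z_1(y)=\mathbb{1}_{\{y \geq c\}}$ and $Z_2(y)=\mathbb{1}_{\{y \geq c+\varepsilon\}}$ as elements of D[a, b]. Then $\|Z_1-Z_2\|_\infty=1$, but if $\lambda\in\Lambda_{a,b}$ is the piecewise-linear bijection with $\lambda(a)=a$, $\lambda(c+\varepsilon)=c$, and $\lambda(b)=b$, then $Z_1\circ\lambda=Z_2$ and $\|\lambda-\mathrm{Id}_{a,b}\|_\infty\leq\varepsilon$, so that $d_{J_1,a,b}(Z_1,Z_2)\leq\varepsilon$: a small shift of the location of a jump costs little in the Skorokhod $J_1$ metric, while it costs 1 in the uniform norm.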

The Skorokhod $J_1$ topology is widely used to study the convergence of càdlàg functions. However, when the limit function has a jump, which will be the case here, convergence in the Skorokhod $J_1$ topology requires the converging functions to have a single big jump approximating the jump of the limit process. To account for other cases, like having the jump of the limit function approximated by several smaller jumps in quick succession or by a very steep continuous slope, one has to use a less restrictive topology, such as the Skorokhod $M_1$ topology.

The Skorokhod $M_1$ topology was also introduced by Skorokhod in [Reference Skorokhod12] (see [Reference Whitt19, Section 3.3] for an overview). For any $a<b$ in $\mathbb{R}$ , the Skorokhod $M_1$ distance on D[a, b] is defined as follows: the distance between two functions will be roughly ‘the distance between the completed graphs of the functions’. More rigorously, if $Z\in D[a,b]$ , we set $Z(a^{-})=Z(a)$ , and for any $y \in (a,b]$ , we set $Z(y^{-})=\lim_{y^{\prime} \to y, y^{\prime}<y}Z(y^{\prime})$ . Then the completed graph of Z is

\begin{equation*}\Gamma_Z=\{(y,z)\,|\,y\in[a,b],\ \exists\,\varepsilon\in[0,1] \textrm{ such that } z=\varepsilon Z(y^{-})+(1-\varepsilon)Z(y)\}.\end{equation*}

To express the ‘distance between two such completed graphs’, we need to define the parametric representations of $\Gamma_Z$ (by abuse of notation, we will often write ‘the parametric representations of Z’). We define an order on $\Gamma_Z$ as follows: for $(y_1,z_1),(y_2,z_2) \in \Gamma_Z$ , we have $(y_1,z_1) \leq (y_2,z_2)$ when $y_1 < y_2$ or when $y_1=y_2$ and $|Z\big(y_1^{-}\big)-z_1|\leq|Z\big(y_1^{-}\big)-z_2|$ . A parametric representation of $\Gamma_Z$ is a continuous, surjective function $(u,r)\,:\,[0,1] \mapsto \Gamma_Z$ that is non-decreasing with respect to this order; thus intuitively, when t goes from 0 to 1, (u(t),r(t)) ‘travels through the completed graph of Z from its beginning to its end’. A parametric representation of Z always exists (see [Reference Whitt19, Remark 12.3.3]). For $Z_1,Z_2 \in D[a,b]$ , the Skorokhod $M_1$ distance between $Z_1$ and $Z_2$ , denoted by $d_{M_1,a,b}(Z_1,Z_2)$ , is $\inf\big\{\max\big(\|u_1-u_2\|_\infty,\|r_1-r_2\|_\infty\big)\big\}$ , where the infimum is on the parametric representations $(u_1,r_1)$ of $Z_1$ and $(u_2,r_2)$ of $Z_2$ . It can be proven that this indeed gives a metric (see [Reference Whitt19, Theorem 12.3.1]), and this metric defines the Skorokhod $M_1$ topology on D[a, b]. For any $a>0$ , we will denote $d_{M_1,-a,a}$ by $d_{M_1,a}$ for short. We can now define the Skorokhod $M_1$ topology in $D({-}\infty,\infty)$ through the following metric: for $Z_1,Z_2 \in D({-}\infty,\infty)$ , we set

\begin{equation*}d_{M_1}(Z_1,Z_2)=\int_0^{+\infty}e^{-a}\big(d_{M_1,a}\big(Z_1|_{[{-}a,a]},Z_2|_{[{-}a,a]}\big) \wedge 1 \big)\mathrm{d}a.\end{equation*}

It can be seen that the Skorokhod $M_1$ topology is weaker than the Skorokhod $J_1$ topology (see [Reference Whitt19, Theorem 12.3.2]), and thus less restrictive. Indeed, since the distance between two functions is roughly ‘the distance between the completed graphs of the functions’, the Skorokhod $M_1$ topology will allow a function with a jump to be the limit of functions with steep slopes or with several smaller jumps. For this reason, the Skorokhod $M_1$ topology is often more suitable when one is considering convergence to a discontinuous function.
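As an illustration of the difference, consider the following example, which is added here and is not part of the original article. Let $Z(y)=\mathbb{1}_{\{y \geq 0\}}$ on $[{-}1,1]$ and, for $n\geq1$, let $Z_n$ be the continuous function with $Z_n(y)=0$ for $y\leq 0$, $Z_n(y)=ny$ for $0\leq y\leq 1/n$, and $Z_n(y)=1$ for $y\geq 1/n$. Pairing the vertical segment $\{0\}\times[0,1]$ of the completed graph of Z with the steep ramp of $Z_n$ gives parametric representations whose components differ by at most $1/n$, so $d_{M_1,1}(Z_n,Z)\leq 1/n \to 0$. On the other hand, for every $\lambda\in\Lambda_{-1,1}$, the function $Z\circ\lambda$ takes only the values 0 and 1, while $Z_n\big(\frac{1}{2n}\big)=\frac{1}{2}$, so $\|Z\circ\lambda-Z_n\|_\infty\geq\frac{1}{2}$ and hence $d_{J_1,-1,1}(Z_n,Z)\geq\frac{1}{2}$ for every n. Thus the steep continuous slopes converge to the jump in the Skorokhod $M_1$ topology but not in the Skorokhod $J_1$ topology; this is exactly the phenomenon behind Proposition 2 below.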

1.4. Results

We are now ready to state our results on the convergence of the fluctuations of the local times process. For any $\theta>0$ , $x\in \mathbb{R}$ , $\iota\in\{+,-\}$ , for any $N \in \mathbb{N}^*$ , we define functions $Y_N^{-},Y_N^+$ as follows: for any $y\in\mathbb{R}$ , we set

\begin{equation*}Y_N^\pm(y)=\frac{1}{\sqrt{N}}\left(\ell^\pm\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)-N\left(\frac{|x|-|y|}{2}+\theta\right)_+\right).\end{equation*}

The functions $Y_N^\pm$ actually depend on $\iota$ , but to make the notation lighter, we do not write this dependency explicitly. Moreover, $\big(B_y^x\big)_{y\in\mathbb{R}}$ will denote a two-sided Brownian motion with $B_x^x=0$ and variance $\mathrm{Var}(\rho_-)$ , where $\rho_-$ is the distribution on $\mathbb{Z}$ defined later in (3). We prove the following convergence for the fluctuations of the local times process of the self-repelling random walk with directed edges.

Theorem 2. For any $\theta>0$ , $x\in \mathbb{R}$ , $\iota\in\{+,-\}$ , the process $Y_N^\pm$ converges in distribution to $\Big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\Big)_{y\in\mathbb{R}}$ in the Skorokhod $M_1$ topology on $D({-}\infty,+\infty)$ when N tends to $+\infty$ .
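To give a concrete picture of the limit appearing in Theorem 2, the following sketch simulates a discretized two-sided Brownian motion anchored at $y=x$ and sets it to 0 outside $[{-}|x|-2\theta,|x|+2\theta)$. The value var_rho used below is a placeholder for $\mathrm{Var}(\rho_-)$, not the variance associated with any particular weight function w, and the discretization step and function names are ours.

```python
import math
import random

# Discretized sketch of the limit process of Theorem 2: a two-sided Brownian
# motion anchored at y = x, set to 0 outside [-|x|-2*theta, |x|+2*theta).
# var_rho is a placeholder for Var(rho_-); dy, y_max and the names are ours.
def limit_process(theta, x, var_rho, dy=0.01, y_max=4.0, seed=0):
    rng = random.Random(seed)
    n = round(y_max / dy)
    k0 = round(x / dy)                   # grid index closest to x
    b = {k0: 0.0}                        # B^x_x = 0
    for k in range(k0 + 1, n + 1):       # independent increments to the right of x
        b[k] = b[k - 1] + rng.gauss(0.0, math.sqrt(var_rho * dy))
    for k in range(k0 - 1, -n - 1, -1):  # and to the left of x
        b[k] = b[k + 1] + rng.gauss(0.0, math.sqrt(var_rho * dy))
    edge = abs(x) + 2 * theta
    return {k: (b[k] if -edge <= k * dy < edge else 0.0) for k in range(-n, n + 1)}

sample = limit_process(theta=1.0, x=0.5, var_rho=0.25)   # key k stands for y = k*dy
print(sample[0], sample[300])   # nonzero at y = 0, forced to 0 at y = 3 >= |x|+2*theta
```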

Therefore, the fluctuations of the local times process have a diffusive limit behavior. However, it is necessary to use the Skorokhod $M_1$ topology here, as the following result states that convergence does not occur in the stronger Skorokhod $J_1$ topology.

Proposition 2. For any $\theta>0$ , $x\in \mathbb{R}$ , $\iota\in\{+,-\}$ , the process $Y_N^\pm$ does not converge in distribution in the Skorokhod $J_1$ topology on $D({-}\infty,+\infty)$ when N tends to $+\infty$ .

We stress the fact that the use of the Skorokhod $M_1$ topology is required only to deal with the discontinuities of the limit process at $-|x|-2\theta$ and $|x|+2\theta$ . Indeed, if we consider the process on an interval that does not include $-|x|-2\theta$ or $|x|+2\theta$ , it converges in the much stronger topology given by the uniform norm, as stated in the following result.

Proposition 3. For any $\theta>0$ , $x\in \mathbb{R}$ , $\iota\in\{+,-\}$ , for any closed interval $I\subset\mathbb{R}$ that does not contain $-|x|-2\theta$ or $|x|+2\theta$ , the process $\big(Y_N^\pm(y)\big)_{y\in I}$ converges in distribution to $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in I}$ in the topology on DI given by the uniform norm when N tends to $+\infty$ .

Finally, we also prove the convergence of the fluctuations of $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ . For any $\sigma^2>0$ , we denote by $\mathcal{N}\big(0,\sigma^2\big)$ the Gaussian distribution with mean 0 and variance $\sigma^2$ , and we recall that $\rho_-$ will be defined in (3). We then have the following.

Proposition 4. For any $\theta>0$ , $x\in \mathbb{R}$ , $\iota\in\{+,-\}$ , we have that

\begin{equation*}\frac{1}{N^{3/2}}\left( T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota-N^2(|x|+2\theta)^2\right)\end{equation*}

converges in distribution to $\mathcal{N}(0,\frac{32}{3}\mathrm{Var}(\rho_-)((|x|+\theta)^3+\theta^3))$ when N tends to $+\infty$ .

Remark 1. Instead of studying the fluctuations of $\ell^\pm\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,.\Big)$ , it may seem more natural to consider those of $\ell^\pm(N^2,.)$ . However, the Ray–Knight arguments that allow one to study $\ell^\pm\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,.\Big)$ completely break down for $\ell^\pm(N^2,.)$ , and it is not even clear whether these two processes should have the same behavior.

Remark 2. Apart from the article of Tóth and Vető [Reference Tóth and Vető17] that introduced the self-repelling random walk with directed edges, there have been a few other works on this model. These works were motivated by another important question, that of the existence of a scaling limit for $(X_n)_{n \in \mathbb{N}}$ , which means the convergence in distribution of the process $\big(\frac{1}{N^\alpha}X_{\lfloor Nt\rfloor}\big)_{t \geq 0}$ for some $\alpha$ . Obtaining such a scaling limit for the trajectory of the random walk is harder than obtaining scaling limits for the local times. Indeed, for the random walks with bond repulsion with undirected edges introduced by Tóth in [Reference Tóth13–Reference Tóth15], the scaling limits for the local times have been known since the introduction of the models, but the scaling limits for the trajectories are not known. Some results were proven by Kosygina, Mountford, and Peterson in [Reference Kosygina, Mountford and Peterson3], but they do not cover all models. For the self-repelling random walk with directed edges, the behavior of the scaling limit of the trajectory turns out to be surprising. Indeed, Mountford, Pimentel, and Valle proved in [Reference Mountford, Pimentel and Valle7] that $\frac{1}{\sqrt{N}}X_N$ converges in distribution, but Mountford and the author showed in [Reference Marêché and Mountford6] that $\big(\frac{1}{\sqrt{N}}X_{\lfloor Nt \rfloor}\big)_{t \geq 0}$ does not converge in distribution, and that the trajectories of the walk satisfy a more complex limit theorem, of a new kind.

1.5. Proof ideas

We begin by explaining why the limit of the local times process $Y_N^\pm$ is the process $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in\mathbb{R}}$ , and we describe the ideas behind the proofs of Theorem 2 and Proposition 3. To show the convergence of the local times process, we use a Ray–Knight argument; that is, we notice that $\Big(\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)\Big)_i$ is a Markov chain. Moreover, as long as $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)$ is not too low, the quantities

\begin{equation*}\ell^{-}\left(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i+1\right)-\ell^{-}\left(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\right)\end{equation*}

will roughly be independent and identically distributed (i.i.d.) random variables, in the sense that they can be coupled with i.i.d. random variables with a high probability of being equal to them. This coupling was already used in [Reference Tóth and Vető17] to prove the convergence of $\frac{1}{N}\ell^+\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm,\lfloor N y\rfloor\Big)$ to its deterministic limit (for a given y, the coupling makes this convergence a law of large numbers). However, when $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)$ is too low, the coupling fails and the $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor+1\Big)-\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)$ are no longer i.i.d. We have to prove that this occurs only around $|x|+2\theta$ and $-|x|-2\theta$ , and most of our work deals with what happens there. To show that it occurs only around $|x|+2\theta$ and $-|x|-2\theta$ , we control the amplitude of the fluctuations to prove that the local times are close to their deterministic limit. This limit is large inside $({-}|x|-2\theta,|x|+2\theta)$ , so we can use the coupling inside this interval; thus the $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor+1\Big)-\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)$ are roughly i.i.d. there, and hence the fluctuations will converge to a Brownian motion by Donsker’s invariance principle. When we are close to $|x|+2\theta$ (the same reasoning works for $-|x|-2\theta$ ), the deterministic limit will be small, and hence the local times will also be small; tools from [Reference Tóth and Vető17] then allow us to prove that they reach 0 quickly. Once they reach 0, we notice that for $y \geq |x|+2\theta$ , if $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)=0$ , then the walk X did not go from $\lfloor N y\rfloor$ to $\lfloor N y\rfloor-1$ before time $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ ; since at time $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ the walk is at $\lfloor N x\rfloor+1$ or $\lfloor N x\rfloor-1$ , which is smaller than $\lfloor N y\rfloor$ , it did not reach $\lfloor N y\rfloor$ , and thus did not reach any site $j \geq \lfloor N y\rfloor$ , before time $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ ; hence $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,j\Big)=0$ for any $j \geq \lfloor N y\rfloor$ . Therefore, once the local times process reaches 0, it stays there. Consequently, we expect $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,\lfloor N y\rfloor\Big)$ to be 0 when $y > |x|+2\theta$ , and thus to have no fluctuations when $y > |x|+2\theta$ ; similar statements hold when $y < -|x|-2\theta$ . This is why our limit is $\Big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\Big)_{y\in\mathbb{R}}$ .

Since Proposition 3 only describes convergence away from $-|x|-2\theta$ and $|x|+2\theta$ , the previous arguments are enough to prove it. To prove the convergence in the Skorokhod $M_1$ topology on $D({-}\infty,+\infty)$ stated in Theorem 2, we need to handle what happens around $-|x|-2\theta$ and $|x|+2\theta$ with more precision. We first have to bound the difference between the local times and the i.i.d. random variables of the coupling even where the coupling fails. Afterwards comes the most important part of the paper: defining parametric representations of $Y_N^\pm$ and of the sum of the i.i.d. random variables of the coupling, properly renormalized and set to 0 outside of $[{-}|x|-2\theta,|x|+2\theta)$ , and then proving that they are close to each other. That allows us to prove that $Y_N^\pm$ is close in the Skorokhod $M_1$ distance to a process that will converge in distribution to $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in\mathbb{R}}$ in the Skorokhod $M_1$ topology, which lets us complete the proof of Theorem 2.

To prove Proposition 2, that is, that $Y_N^\pm$ does not converge in the $J_1$ topology, we first notice that since the $J_1$ topology is stronger than the $M_1$ topology, if $Y_N^\pm$ did converge in the $J_1$ topology its limit would be $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in\mathbb{R}}$ . However, this is not possible, as $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in\mathbb{R}}$ has a jump at $|x|+2\theta$ , while the jumps of $Y_N^\pm$ have typical size of order $\frac{1}{\sqrt{N}}$ , so the jump in $\big(B_y^x \mathbb{1}_{\{y\in[{-}|x|-2\theta,|x|+2\theta)\}}\big)_{y\in\mathbb{R}}$ is approximated in $Y_N^\pm$ by either a sequence of small jumps or a continuous slope, which prevents the convergence in the Skorokhod $J_1$ topology.

Finally, to prove Proposition 4 on the fluctuations of $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ , we use the fact that we have

\begin{equation*}T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota=\sum_{i\in\mathbb{Z}}\left(\ell^+\left(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i \right)+\ell^{-}\left(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\right)\right).\end{equation*}

It can be checked that $\Big|\ell^+\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)-\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i+1\Big)\Big|$ equals 0 or 1; hence it is enough to control the $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)$ . By using the coupling for the

\begin{equation*}\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i+1\Big)-\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)\end{equation*}

when $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)$ is high enough, and using our estimates on the size of the window in which $\ell^{-}\Big(T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota,i\Big)$ is neither high enough nor 0, we can prove that $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ is close to the integral of the sum of the i.i.d. random variables of the coupling, which will yield the convergence.

1.6. Organization of the paper

In Section 2, we define the coupling between the increments of the local time and i.i.d. random variables and prove some of its properties. In Section 3, we control where the local times hit 0, as well as where the local times are too low for the coupling of Section 2 to be useful. In Section 4, we prove a bound on the Skorokhod $M_1$ distance between $Y_N^\pm$ and the renormalized sum of the i.i.d. random variables of the coupling set to 0 outside of $[{-}|x|-2\theta,|x|+2\theta)$ , by writing explicit parametric representations of the two functions. In Section 5, we complete the proof of the convergence of $Y_N^\pm$ stated in Theorem 2 and Proposition 3. In Section 6 we prove that, as claimed in Proposition 2, $Y_N^\pm$ does not converge in the $J_1$ topology. Finally, in Section 7 we prove the convergence of the fluctuations of $T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\pm$ stated in Proposition 4.

In what follows, we set $\theta>0$ , $\iota\in\{+,-\}$ , and $x > 0$ (the cases $x<0$ and $x=0$ can be dealt with in the same way). To simplify the notation, we set $T_N=T_{\lfloor N \theta\rfloor,\lfloor N x\rfloor}^\iota$ . Moreover, for any $a,b\in\mathbb{R}$ , we set $a \vee b=\max(a,b)$ and $a \wedge b=\min(a,b)$ .

2. Coupling of the local times increments with i.i.d. random variables

Our goal in this section will be to couple the $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ with i.i.d. random variables and to prove some properties of this coupling. This part of the work is not very different from what was done in [Reference Tóth and Vető17], but we still recall the concepts and definitions from that paper. If we fix $i\in\mathbb{Z}$ and observe the evolution of $(\ell^{-}(n,i)-\ell^+(n,i))_{n\in\mathbb{N}}$ , and if we ignore the steps at which $\ell^{-}(n,i)-\ell^+(n,i)$ does not move (i.e. those at which the random walk is not at i), then we obtain a Markov chain $\xi_i$ whose distribution $\xi$ has the following transition probabilities: for all $n \in \mathbb{N}$ ,

\begin{equation*}\mathbb{P}(\xi(n+1)=\xi(n)\pm1)=\frac{w(\mp\xi(n))}{w(\xi(n))+w({-}\xi(n))},\end{equation*}

and $\xi_i(0)=0$ . Now, we set $\tau_{i,\pm}(0)=0$ , and for any $n\in\mathbb{N}$ , we define $\tau_{i,\pm}(n+1)=\inf\{m>\tau_{i,\pm}(n)\,|\,\xi_i(m)=\xi_i(m-1)\pm1\}$ , so that $\tau_{i,+}(n)$ is the time of the nth upward step of $\xi_i$ and $\tau_{i,-}(n)$ is the time of the nth downward step of $\xi_i$ . Then, since the distribution of $\xi$ is symmetric, the processes $(\eta_{i,+}(n))_{n\in\mathbb{N}}=({-}\xi_{i}(\tau_{i,+}(n)))_{n\in\mathbb{N}}$ and $(\eta_{i,-}(n))_{n\in\mathbb{N}}=(\xi_{i}(\tau_{i,-}(n)))_{n\in\mathbb{N}}$ have the same distribution, called $\eta$ , and it can be checked that $\eta$ is a Markov chain.
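The chains $\xi$ and $\eta$ can be simulated directly from these definitions. The sketch below (with the illustrative weight $w(k)=e^k$, an assumption of this illustration) implements one step of $\xi$ and one step of $\eta_-$ (a step of $\eta_-$ runs $\xi$ from the current value until its next downward step); by the symmetry of $\xi$, the chain $\eta_+$ has the same law. The empirical distribution of a long $\eta$ trajectory should be close to the invariant law $\rho_-$ given in (3) below.

```python
import math
import random

# Minimal sketch (not from the paper) of the chains xi and eta, with the
# illustrative weight function w(k) = exp(k).
w = lambda k: math.exp(k)

def xi_step(v, rng):
    """One step of xi: +1 with probability w(-v)/(w(v)+w(-v)), -1 otherwise."""
    return v + 1 if rng.random() < w(-v) / (w(v) + w(-v)) else v - 1

def eta_minus_step(v, rng):
    """One step of eta_-: run xi from v and return its value at the next
    downward step (eta_+ has the same law by the symmetry of xi)."""
    u = v
    while True:
        nxt = xi_step(u, rng)
        if nxt == u - 1:
            return nxt
        u = nxt

# Empirical distribution of a long eta trajectory; it should be close to the
# invariant law rho_- given in (3) below.
rng = random.Random(0)
v, counts, n_steps = 0, {}, 100000
for _ in range(n_steps):
    v = eta_minus_step(v, rng)
    counts[v] = counts.get(v, 0) + 1
print({k: c / n_steps for k, c in sorted(counts.items())})
```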

We are going to give an expression for $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ depending on the $\eta_{i,-}$ , $\eta_{i,+}$ . We assume N large enough (so that $\lfloor N x\rfloor-1>0$ ). By definition of $T_N$ we have $X_{T_N}=\lfloor N x\rfloor\iota 1$ , that is, $X_{T_N}=\lfloor N x\rfloor+1$ if $\iota=+$ and $X_{T_N}=\lfloor N x\rfloor-1$ if $\iota=-$ . If $i \leq 0$ we thus have $X_{T_N}>i$ , which means the last step of the walk at i before $T_N$ was going to the right, so the last step of $\xi_i$ was a downward step, and by definition of $\ell^+(T_N,i)$ we have that $\xi_i$ made $\ell^+(T_N,i)$ downward steps; hence

\begin{equation*}\ell^{-}(T_N,i)-\ell^+(T_N,i)=\xi_i(\tau_{i,-}(\ell^+(T_N,i)))=\eta_{i,-}(\ell^+(T_N,i)),\end{equation*}

which yields $\ell^{-}(T_N,i)-\ell^+(T_N,i)=\eta_{i,-}(\ell^+(T_N,i))$ . In addition, $\ell^{-}(T_N,i)=\ell^+(T_N,i-1)$ ; hence

\begin{equation*}\ell^+(T_N,i-1)=\ell^+(T_N,i)+\eta_{i,-}(\ell^+(T_N,i)).\end{equation*}

If $0 < i < \lfloor N x\rfloor$ (for $\iota=-$ ) or $0 < i \leq \lfloor N x\rfloor$ (for $\iota=+$ ), the last step of the walk at i was also going to the right, so we also have $\ell^{-}(T_N,i)-\ell^+(T_N,i)=\eta_{i,-}(\ell^+(T_N,i))$ . However, $\ell^{-}(T_N,i)=\ell^+(T_N,i-1)-1$ , so $\ell^+(T_N,i-1)=\ell^+(T_N,i)+\eta_{i,-}(\ell^+(T_N,i))+1$ . Finally, if $i \geq \lfloor N x\rfloor$ (for $\iota=-$ ) or $i > \lfloor N x\rfloor$ (for $\iota=+$ ), then the last step of the walk at i was going to the left, so the last step of $\xi_i$ was an upward step, and $\xi_i$ made $\ell^{-}(T_N,i)$ upward steps; therefore

\begin{equation*}\ell^{-}(T_N,i)-\ell^+(T_N,i)=\xi_i\big(\tau_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)\big)=-\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big),\end{equation*}

which yields $\ell^{-}(T_N,i)-\ell^+(T_N,i)=-\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)$ . Moreover, $\ell^+(T_N,i)=\ell^{-}(T_N,i+1)$ , and hence $\ell^{-}(T_N,i+1)=\ell^{-}(T_N,i)+\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)$ .
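These crossing relations are deterministic, path-by-path identities, so they can be checked directly on a simulated trajectory. The following sketch (for $\iota=-$, with the illustrative weight $w(k)=e^k$ and parameters chosen only for the illustration) runs the walk until $T^{-}_{n,k}$ and verifies $\ell^{-}(T_N,i)=\ell^+(T_N,i-1)$ for $i\leq 0$, $\ell^{-}(T_N,i)=\ell^+(T_N,i-1)-1$ for $0<i<\chi(N)$, and $\ell^+(T_N,i)=\ell^{-}(T_N,i+1)$ for $i\geq\chi(N)$.

```python
import math
import random
from collections import defaultdict

# Path-by-path check of the crossing relations above, for iota = '-':
# run the walk until T^-_{n,k} (so chi(N) = k) and compare ell^- with ell^+.
# w(v) = exp(v) is an illustrative choice, not one made in the paper.
def run_until_T_minus(n, k, w=lambda v: math.exp(v), seed=1):
    rng = random.Random(seed)
    ell_p, ell_m, pos = defaultdict(int), defaultdict(int), 0
    while ell_m[k] < n:
        d = ell_m[pos] - ell_p[pos]
        if rng.random() < w(d) / (w(d) + w(-d)):
            ell_p[pos] += 1
            pos += 1
        else:
            ell_m[pos] += 1
            pos -= 1
    return ell_p, ell_m

k = 10                                   # plays the role of chi(N)
ell_p, ell_m = run_until_T_minus(n=50, k=k)
for i in range(-40, 60):
    if i <= 0:
        assert ell_m[i] == ell_p[i - 1]          # ell^-(T_N,i) = ell^+(T_N,i-1)
    elif i < k:
        assert ell_m[i] == ell_p[i - 1] - 1      # ell^-(T_N,i) = ell^+(T_N,i-1) - 1
    else:
        assert ell_p[i] == ell_m[i + 1]          # ell^+(T_N,i) = ell^-(T_N,i+1)
print("crossing identities verified")
```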

We are going to use these results to deduce an expression for the $\ell^\pm(T_N,i)$ which will be very useful throughout this work. Defining $\chi(N)=\lfloor N x\rfloor$ if $\iota=-$ and $\chi(N)=\lfloor N x\rfloor+1$ if $\iota=+$ , for $i \geq \chi(N)$ we have

\begin{equation*}\ell^{-}(T_N,i)=\ell^{-}(T_N,\chi(N))+\sum_{j=\chi(N)}^{i-1}\eta_{j,+}(\ell^{-}(T_N,j)),\end{equation*}

and for $i < \chi(N)$ we have

\begin{equation*}\ell^+(T_N,i)=\ell^+(T_N,\chi(N)-1)+\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}(\ell^+(T_N,j))+\mathbb{1}_{\{j > 0\}}\big).\end{equation*}

Now, we remember that the definition of $T_N$ implies $\ell^\iota(T_N,\lfloor N x\rfloor)=\lfloor N \theta\rfloor$ , so if $\iota=-$ we have $\ell^{-}(T_N,\chi(N))=\lfloor N \theta\rfloor$ and $\ell^+(T_N,\chi(N)-1)=\ell^{-}(T_N,\chi(N))=\lfloor N \theta\rfloor$ , while if $\iota=+$ we have $\ell^+(T_N,\chi(N)-1)=\lfloor N \theta\rfloor$ and $\ell^{-}(T_N,\chi(N))=\ell^+(T_N,\chi(N)-1)-1=\lfloor N \theta\rfloor-1$ . Consequently, we have the following:

(1) \begin{equation}\begin{split} \textrm{If }i \geq \chi(N),& \quad \ell^{-}(T_N,i)=\lfloor N \theta\rfloor-\mathbb{1}_{\{\iota=+\}}+\sum_{j=\chi(N)}^{i-1}\eta_{j,+}(\ell^{-}(T_N,j)). \\ \textrm{If }i < \chi(N),& \quad \ell^+(T_N,i)=\lfloor N \theta\rfloor+\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}(\ell^+(T_N,j))+\mathbb{1}_{\{j > 0\}}\big). \end{split}\end{equation}

We will also need to remember the following:

(2) \begin{equation}\begin{split} &\textrm{If }i \geq \chi(N),\quad \ell^{-}(T_N,i)-\ell^+(T_N,i)=-\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big). \\ &\textrm{ If } i < \chi(N),\quad \ell^{-}(T_N,i)-\ell^+(T_N,i)=\eta_{i,-}(\ell^+(T_N,i)). \end{split}\end{equation}

To couple the $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ with i.i.d. random variables, we need to understand the $\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)$ and the $\eta_{i,-}(\ell^+(T_N,i))$ . The paper [Reference Tóth and Vető17] proved that the following measure $\rho_-$ is the unique invariant probability distribution of the Markov chain $\eta$ :

(3) \begin{equation} \forall i \in \mathbb{Z}, \quad \rho_-(i)=\frac{1}{R} \prod_{j=1}^{\lfloor|2i+1|/2\rfloor}\frac{w({-}j)}{w(j)}\quad\textrm{with}\quad R=\sum_{i\in\mathbb{Z}}\prod_{j=1}^{\lfloor|2i+1|/2\rfloor}\frac{w({-}j)}{w(j)}.\end{equation}

We also denote by $\rho_0$ the measure on $\frac{1}{2}+\mathbb{Z}$ defined by $\rho_0(\cdot)=\rho_-\big(\cdot-\frac{1}{2}\big)$ .
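For a concrete weight function, $\rho_-$ can be computed numerically from (3). The sketch below uses the illustrative choice $w(k)=e^k$ and truncates $\mathbb{Z}$ to a finite window (both assumptions of the sketch, not of the paper); it also checks that $\rho_-(i)=\rho_-({-}1-i)$, i.e. that $\rho_0$ is symmetric with respect to 0, a fact used in the proof of Lemma 4 below, and estimates $\mathrm{Var}(\rho_-)$.

```python
import math

# Numerical sketch of rho_- from (3), with the illustrative weight w(k)=exp(k)
# and a truncation of Z to a finite window (artifacts of the sketch only).
w = lambda k: math.exp(k)

def rho_minus_weight(i):
    """Unnormalized weight prod_{j=1}^{floor(|2i+1|/2)} w(-j)/w(j)."""
    m = abs(2 * i + 1) // 2
    p = 1.0
    for j in range(1, m + 1):
        p *= w(-j) / w(j)
    return p

support = range(-30, 30)
weights = {i: rho_minus_weight(i) for i in support}
R = sum(weights.values())                          # truncated normalization
rho = {i: v / R for i, v in weights.items()}

# rho_0(i + 1/2) = rho_-(i); its symmetry about 0 amounts to
# rho_-(i) = rho_-(-1 - i), which is used in the proof of Lemma 4 below.
assert all(abs(rho[i] - rho[-1 - i]) < 1e-12 for i in range(0, 29))
# The mean of rho_- is -1/2, so Var(rho_-) = sum_i (i + 1/2)^2 rho_-(i).
print(sum((i + 0.5) ** 2 * p for i, p in rho.items()))
```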

We are now in position to construct the coupling of the $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ with i.i.d. random variables $(\zeta_i)_{i\in\mathbb{Z}}$ . The idea is that $\eta$ can be expected to converge to its invariant distribution $\rho_-$ ; hence when $\ell^\pm(T_N,i)$ is large, $\eta_{i,\mp}\big(\ell^\pm(T_N,i)\big)$ will be close to a random variable of law $\rho_-$ . More rigorously, we begin by defining an i.i.d. sequence $(r_i)_{i\in\mathbb{Z}}$ of random variables of distribution $\rho_-$ so that if $i \geq \chi(N)$ , then $\mathbb{P}\big(r_i \neq \eta_{i,+}\big(\big\lfloor N^{1/6}\big\rfloor\big)\big)$ is minimal, and if $i < \chi(N)$ , then $\mathbb{P}\big(r_i \neq \eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)\big)$ is minimal. We can then define i.i.d. Markov chains $(\bar\eta_{i,+}(n))_{n \geq \big\lfloor N^{1/6}\big\rfloor}$ for $i \geq \chi(N)$ and $(\bar\eta_{i,-}(n))_{n \geq \big\lfloor N^{1/6}\big\rfloor}$ for $i < \chi(N)$ so that $\bar\eta_{i,\pm}\big(\big\lfloor N^{1/6}\big\rfloor\big)=r_i$ , $\bar\eta_{i,\pm}$ is a Markov chain of distribution equal to that of $\eta$ , and if $\bar\eta_{i,\pm}\big(\big\lfloor N^{1/6}\big\rfloor\big)=\eta_{i,\pm}\big(\big\lfloor N^{1/6}\big\rfloor\big)$ , then $\bar\eta_{i,\pm}(n)=\eta_{i,\pm}(n)$ for any $n \geq \big\lfloor N^{1/6}\big\rfloor$ . Since $\rho_-$ is invariant for $\eta$ , if $n \geq \big\lfloor N^{1/6}\big\rfloor$ , then the $\bar\eta_{i,+}(n)$ for $i \geq \chi(N)$ and $\bar\eta_{i,-}(n)$ for $i < \chi(N)$ have distribution $\rho_-$ . We define the random variables $(\zeta_i)_{i\in\mathbb{Z}}$ as follows: for $i \geq \chi(N)$ we set $\zeta_i=\bar\eta_{i,+}\big(\ell^{-}(T_N,i)\vee\big\lfloor N^{1/6}\big\rfloor\big)+\frac{1}{2}$ , and for $i < \chi(N)$ we set $\zeta_i=\bar\eta_{i,-}\big(\ell^+(T_N,i)\vee\big\lfloor N^{1/6}\big\rfloor\big)+\frac{1}{2}$ . For $i \geq \chi(N)$ , (1) implies that $\ell^{-}(T_N,i)$ depends only on the $\eta_{j,+}$ , $\chi(N) \leq j \leq i-1$ , and hence is independent from $\bar\eta_{i,+}$ , which implies that $\zeta_i$ has distribution $\rho_0$ and is independent from the $\zeta_j$ , $\chi(N) \leq j \leq i-1$ . This together with a similar argument for $i < \chi(N)$ implies that the $(\zeta_i)_{i\in\mathbb{Z}}$ are i.i.d. with distribution $\rho_0$ .
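The choice of $r_i$ minimizing $\mathbb{P}\big(r_i \neq \eta_{i,\pm}\big(\big\lfloor N^{1/6}\big\rfloor\big)\big)$ is a maximal coupling of $\rho_-$ with the law of $\eta_{i,\pm}\big(\big\lfloor N^{1/6}\big\rfloor\big)$. For completeness, here is a generic sketch of a maximal coupling of two discrete distributions; it is the standard construction, not the specific coupling of this paper, and the helper names are ours.

```python
import random

def _draw(dist, rng):
    """Draw from a finite distribution given as a dict value -> probability."""
    u, acc = rng.random(), 0.0
    for k, v in dist.items():
        acc += v
        if u <= acc:
            return k
    return k  # guard against floating-point rounding

def maximal_coupling(p, q, rng):
    """Sample (X, Y) with X ~ p, Y ~ q and P(X != Y) equal to the total
    variation distance between p and q (p, q given as dicts summing to 1)."""
    support = set(p) | set(q)
    overlap = {k: min(p.get(k, 0.0), q.get(k, 0.0)) for k in support}
    m = sum(overlap.values())            # m = 1 - d_TV(p, q)
    if rng.random() < m:
        x = _draw({k: v / m for k, v in overlap.items()}, rng)
        return x, x                      # on the overlap part, X = Y
    # otherwise draw X and Y from the normalized residuals, whose supports
    # are disjoint, so X != Y on this part
    res_p = {k: (p.get(k, 0.0) - overlap[k]) / (1.0 - m) for k in support}
    res_q = {k: (q.get(k, 0.0) - overlap[k]) / (1.0 - m) for k in support}
    return _draw(res_p, rng), _draw(res_q, rng)

rng = random.Random(0)
p = {0: 0.5, 1: 0.5}
q = {0: 0.25, 1: 0.5, 2: 0.25}
samples = [maximal_coupling(p, q, rng) for _ in range(10000)]
print(sum(x != y for x, y in samples) / 10000)   # close to d_TV(p, q) = 0.25
```

With this construction, the probability that the two coordinates differ equals the total variation distance between the two laws, which is exactly the minimality property required of the $r_i$.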

We will prove several properties of $(\zeta_i)_{i\in\mathbb{Z}}$ that we will use in the remainder of the proof. In order to do that, we need the following lemma from [Reference Tóth and Vető17].

Lemma 1. ([Reference Tóth and Vető17, Lemma 1].) There exist two constants $\tilde c =\tilde c(w)>0$ and $\tilde C = \tilde C(w) < +\infty$ such that for any $n\in\mathbb{N}$ ,

\begin{equation*} \mathbb{P}(\eta(n)=i|\eta(0)=0) \leq \tilde C e^{-\tilde c |i|} \quad \text{and} \quad \sum_{i\in\mathbb{Z}}|\mathbb{P}(\eta(n)=i|\eta(0)=0)-\rho_-(i)| \leq \tilde C e^{-\tilde c n}. \end{equation*}

Firstly, we want to prove that our coupling is actually useful: that the $\zeta_i$ are close to the $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ . More precisely, we will show that except on an event of probability tending to 0, if $\ell^\pm(T_N,i)$ is large then $\zeta_i=\eta_{i,\mp}(\ell^\pm(T_N,i))+1/2$ , which is related to $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ through (1). We define

(4) \begin{equation}\begin{split} \mathcal{B}_1^{-}&=\big\{\exists\, i\in\{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\},\ \ell^+(T_N,i)\geq\big\lfloor N^{1/6}\big\rfloor \textrm{ and }\zeta_i\neq\eta_{i,-}(\ell^+(T_N,i))+1/2\big\}, \\ \mathcal{B}_1^+&=\big\{\exists\, i\in\{\chi(N),\ldots,\lceil2(|x|+2\theta)N\rceil\},\ \ell^{-}(T_N,i)\geq\big\lfloor N^{1/6}\big\rfloor \textrm{ and }\zeta_i\neq\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+1/2\big\}. \end{split}\end{equation}

Lemma 1 will allow us to prove the following.

Lemma 2. $\mathbb{P}\big(\mathcal{B}_1^{-}\big)$ and $\mathbb{P}\big(\mathcal{B}_1^+\big)$ tend to 0 when $N \to +\infty$ .

Proof. By definition, for any $i\in\{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\}$ we have

\begin{equation*}\zeta_i=\bar\eta_{i,-}\big(\ell^+(T_N,i)\vee\big\lfloor N^{1/6}\big\rfloor\big)+\frac{1}{2},\end{equation*}

which is $\bar\eta_{i,-}(\ell^+(T_N,i))+\frac{1}{2}$ when $\ell^+(T_N,i)\geq\big\lfloor N^{1/6}\big\rfloor$ . Now, $\bar\eta_{i,-}=\eta_{i,-}$ if $\bar\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)=\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)$ ; that is, $r_i=\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)$ . We deduce that

\begin{equation*}\mathbb{P}\big(\mathcal{B}_1^{-}\big) \leq \mathbb{P}\big(\exists\, i\in\{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\},r_i\neq\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)\big).\end{equation*}

Now, for any $i < \chi(N)$ , the probability $\mathbb{P}\big(r_i\neq\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)\big)$ is minimal, hence equal to the total variation distance between the law of $\eta_{i,-}\big(\big\lfloor N^{1/6}\big\rfloor\big)$ and $\rho_-$ , and thus smaller than $\tilde C e^{-\tilde c \big\lfloor N^{1/6}\big\rfloor}$ by the second part of Lemma 1. Consequently, when N is large enough, a union bound yields $\mathbb{P}\big(\mathcal{B}_1^{-}\big) \leq 3(|x|+2\theta)N\tilde C e^{-\tilde c \big\lfloor N^{1/6}\big\rfloor}$ , which tends to 0 when $N \to +\infty$ . The proof for $\mathbb{P}\big(\mathcal{B}_1^+\big)$ is the same.

Unfortunately, the previous lemma does not allow us to control the local times when $\ell^\pm(T_N,i)$ is small. In order to do that, we show several additional properties. We have to control the probability of

\begin{equation*}\begin{split} \mathcal{B}_2=&\big\{\exists \, i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\lceil2(|x|+2\theta)N\rceil\}, |\zeta_i| \geq N^{1/16}\big\} \\ &\cup \big\{\exists \, i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\}, |\eta_{i,-}(\ell^+(T_N,i))+1/2| \geq N^{1/16}\big\} \\ &\cup \big\{\exists \, i \in \{\chi(N),\ldots,\lceil2(|x|+2\theta)N\rceil\}, |\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+1/2| \geq N^{1/16}\big\}. \end{split}\end{equation*}

Lemma 3. $\mathbb{P}(\mathcal{B}_2)$ tends to 0 when N tends to $+\infty$ .

Proof. It is enough to find some constants $c>0$ and $C < +\infty$ such that for any $i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\lceil2(|x|+2\theta)N\rceil\}$ we have

\begin{equation*}\mathbb{P}\big(|\zeta_i| \geq N^{1/16}\big) \leq Ce^{-cN^{1/16}},\end{equation*}

for any $i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\}$ we have

\begin{equation*}\mathbb{P}\big(|\eta_{i,-}(\ell^+(T_N,i))+1/2| \geq N^{1/16}\big) \leq Ce^{-cN^{1/16}},\end{equation*}

and for all $i \in \{\chi(N),\ldots,\lceil2(|x|+2\theta)N\rceil\}$ we have

\begin{equation*}\mathbb{P}\big(|\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+1/2| \geq N^{1/16}\big) \leq Ce^{-cN^{1/16}}.\end{equation*}

For all $i\in\mathbb{Z}$ , $\zeta_i$ has distribution $\rho_0$ , which has exponential tails; hence there exist constants $c^{\prime}=c^{\prime}(w)>0$ and $C^{\prime}=C^{\prime}(w) < +\infty$ such that for $i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\lceil2(|x|+2\theta)N\rceil\}$ we have $\mathbb{P}\big(|\zeta_i| \geq N^{1/16}\big) \leq C^{\prime}e^{-c^{\prime}N^{1/16}}$ . We now consider $i \in \{-\lceil2(|x|+2\theta)N\rceil,\ldots,\chi(N)-1\}$ and $\mathbb{P}\big(|\eta_{i,-}(\ell^+(T_N,i))+1/2| \geq N^{1/16}\big)$ (the $\mathbb{P}\big(|\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+1/2| \geq N^{1/16}\big)$ can be dealt with in the same way). Equation (1) implies that $\ell^+(T_N,i)$ depends only on the $\eta_{j,-}$ for $j>i$ , and hence is independent of $\eta_{i,-}$ . This implies that

\begin{equation*}\mathbb{P}\big(|\eta_{i,-}(\ell^+(T_N,i))+1/2| \geq N^{1/16}\big) = \sum_{k\in\mathbb{N}}\mathbb{P}\big(|\eta_{i,-}(k)+1/2| \geq N^{1/16}\big)\mathbb{P}\big(\ell^+(T_N,i)=k\big).\end{equation*}

Therefore the first part of Lemma 1 implies that

\begin{align*}\mathbb{P}\big(|\eta_{i,-}(\ell^+(T_N,i))+1/2| \geq N^{1/16}\big) &\leq \sum_{k\in\mathbb{N}}\frac{2\tilde C e^{\tilde c /2}}{1-e^{-\tilde c}}e^{-\tilde c N^{1/16}}\mathbb{P}\big(\ell^+(T_N,i)=k\big)\\&=\frac{2\tilde C e^{\tilde c /2}}{1-e^{-\tilde c}}e^{-\tilde c N^{1/16}},\end{align*}

which is enough.

We will also need the following, which is a fairly standard result on large deviations.

Lemma 4. For any $\alpha>0$ , $\varepsilon>0$ , $\mathbb{P}\Big(\max_{0 \leq i_1 \leq i_2 \leq \lceil N^\alpha\rceil}\left|\sum_{i=i_1}^{i_2}\zeta_i \right| \geq N^{\alpha/2+\varepsilon}\Big)$ tends to 0 when $N \to +\infty$ .

Proof. Let $0 \leq i_1 \leq i_2 \leq \lceil N^\alpha\rceil$ , and let us study $\mathbb{P}\Big(\left|\sum_{i=i_1}^{i_2}\zeta_i \right| \geq N^{\alpha/2+\varepsilon}\Big)$ . We know the $\zeta_i$ , $i\in\mathbb{Z}$ , are i.i.d. with distribution $\rho_0$ , and it can be checked that $\rho_0$ is symmetric with respect to 0, so from that and the Markov inequality we get

(5) \begin{equation} \begin{split} \mathbb{P}\left(\left|\sum_{i=i_1}^{i_2}\zeta_i\right| \geq N^{\alpha/2+\varepsilon}\right) &\leq 2 \mathbb{P}\left(\sum_{i=i_1}^{i_2}\zeta_i \geq N^{\alpha/2+\varepsilon}\right) =2\mathbb{P}\left(\exp\left(\frac{1}{N^{\alpha/2}}\sum_{i=i_1}^{i_2}\zeta_i\right) \geq \exp(N^{\varepsilon})\right) \\ &\leq 2e^{-N^{\varepsilon}}\mathbb{E}\left(\exp\left(\frac{1}{N^{\alpha/2}}\sum_{i=i_1}^{i_2}\zeta_i\right)\right) \leq 2e^{-N^{\varepsilon}}\prod_{i=i_1}^{i_2}\mathbb{E}\left(\exp\left(\frac{1}{N^{\alpha/2}}\zeta_i\right)\right). \end{split} \end{equation}

Now, if $\zeta$ has distribution $\rho_0$ , we can write

\begin{equation*} \exp\left(\frac{1}{N^{\alpha/2}}\zeta\right)=1+\frac{1}{N^{\alpha/2}}\zeta+\frac{1}{2}\left(\frac{1}{N^{\alpha/2}}\zeta\right)^2\exp\left(\frac{1}{N^{\alpha/2}}\zeta^{\prime}\right) \end{equation*}

with $|\zeta^{\prime}|\leq|\zeta|$ . Since $\rho_0$ is symmetric with respect to 0, we have $\mathbb{E}(\zeta)=0$ ; therefore

\begin{equation*} \mathbb{E}\left(\exp\left(\frac{1}{N^{\alpha/2}}\zeta\right)\right) = 1+\mathbb{E}\left(\frac{1}{2}\left(\frac{1}{N^{\alpha/2}}\zeta\right)^2\exp\left(\frac{1}{N^{\alpha/2}}\zeta^{\prime}\right)\right) \leq 1+\frac{1}{2N^\alpha}\mathbb{E}\left(\zeta^2\exp\left(\frac{1}{N^{\alpha/2}}|\zeta|\right)\right). \end{equation*}

Moreover, $\rho_0$ has exponential tails; hence there exist constants $C<+\infty$ and $c>0$ such that $\mathbb{E}\big(\zeta^2e^{c \,|\zeta|}\big) \leq C$ . When N is large enough, $\frac{1}{N^{\alpha/2}} \leq c$ ; therefore

\begin{equation*}\mathbb{E}\Big(\exp\Big(\frac{1}{N^{\alpha/2}}\zeta\Big)\Big) \leq 1+\frac{C}{2N^\alpha}\leq \exp\Big(\frac{C}{2N^\alpha}\Big).\end{equation*}

Together with (5), this yields

\begin{equation*}\mathbb{P}\left(\left|\sum_{i=i_1}^{i_2}\zeta_i \right| \geq N^{\alpha/2+\varepsilon}\right)\leq 2e^{-N^{\varepsilon}}e^{(i_2-i_1+1)\frac{C}{2N^\alpha}} \leq 2e^{-N^{\varepsilon}}e^{\big(\lceil N^\alpha\rceil+1\big)\frac{C}{2N^\alpha}} \leq 2e^{C}e^{-N^{\varepsilon}}\end{equation*}

when N is large enough. We deduce that when N is large enough,

\begin{equation*}\mathbb{P} \left(\max_{0 \leq i_1 \leq i_2 \leq \lceil N^\alpha\rceil} \left|\sum_{i=i_1}^{i_2}\zeta_i \right| \geq N^{\alpha/2+\varepsilon}\right) \leq (\lceil N^\alpha\rceil+1)^2 2e^{C}e^{-N^{\varepsilon}},\end{equation*}

which tends to 0 when N tends to $+\infty$ .

We also prove an immediate application of Lemma 4, which we will use several times. If we define

\begin{equation*} \mathcal{B}_3^{-}=\left\{\max_{-\lfloor (|x|+2\theta)N\rfloor-N^{3/4} \leq i_1 \leq i_2 \leq -\lfloor (|x|+2\theta)N\rfloor+N^{3/4}} \left|\sum_{i=i_1}^{i_2} \zeta_i\right| \geq N^{19/48}\right\}, \end{equation*}
\begin{equation*} \mathcal{B}_3^+=\left\{\max_{\lfloor (|x|+2\theta)N\rfloor-N^{3/4} \leq i_1 \leq i_2 \leq \lfloor (|x|+2\theta)N\rfloor+N^{3/4}} \left|\sum_{i=i_1}^{i_2} \zeta_i\right| \geq N^{19/48}\right\},\end{equation*}

we have the following lemma.

Lemma 5. $\mathbb{P}\big(\mathcal{B}_3^{-}\big)$ and $\mathbb{P}\big(\mathcal{B}_3^+\big)$ tend to 0 when N tends to $+\infty$ .

Proof. Since the $(\zeta_i)_{i\in\mathbb{Z}}$ are i.i.d.,

\begin{equation*}\mathbb{P}\big(\mathcal{B}_3^+\big)=\mathbb{P}\big(\mathcal{B}_3^{-}\big)=\mathbb{P}\left(\max_{0 \leq i_1 \leq i_2 \leq 2 \lceil N^{3/4} \rceil} \left|\sum_{i=i_1}^{i_2} \zeta_i \right| \geq N^{19/48}\right),\end{equation*}

which is smaller than $\mathbb{P}\Big(\max_{0 \leq i_1 \leq i_2 \leq \lceil N^{37/48} \rceil} \Big|\sum_{i=i_1}^{i_2} \zeta_i\Big| \geq N^{19/48}\Big)$ when N is large enough. Moreover, Lemma 4, used with $\alpha=37/48$ and $\varepsilon=1/96$ , yields that the latter probability tends to 0 when N tends to $+\infty$ .

3. Where the local times approach 0

The aim of this section is twofold. Firstly, we need to control the place where $\ell^{-}(T_N,i)$ hits 0 when i is to the right of 0, as well as the place where $\ell^+(T_N,i)$ hits 0 when i is to the left of 0. Secondly, we have to show that even when $\ell^\pm(T_N,i)$ is close to 0, the local times do not stray too far away from the coupling. For any $N \in \mathbb{N}$ , we define $I^+=\inf\{i \geq \chi(N)\,|\,\ell^{-}(T_N,i)=0\}$ and $I^{-}=\sup\{i < \chi(N)\,|\,\ell^+(T_N,i)=0\}$ . We notice that $\ell^+(T_N,I^{-}) = 0$ , and from the definition of $T_N$ we have $\ell^+(T_N,i) > 0$ for any $0 \leq i \leq \chi(N)-1$ ; hence $I^{-}<0$ . We first state an elementary result that we will use many times in this work.

Lemma 6. For any $i \geq I^+$ or $i \leq I^{-}$ we have $\ell^\pm(T_N,i)=0$ .

Proof. Since $\ell^+(T_N,I^{-}) = 0$ and the random walk is at $\lfloor Nx \rfloor \iota1>0$ at time $T_N$ , the random walk did not reach $I^{-}$ before time $T_N$ ; thus $\ell^\pm(T_N,i)=0$ for any $i \leq I^{-}$ . Moreover, $\ell^{-}(T_N,\chi(N)) > 0$ by definition of $T_N$ , so $I^+>\chi(N)$ ; thus $X_{T_N} < I^+$ , and hence $\ell^{-}(T_N,I^+) = 0$ implies that the random walk did not reach $I^+$ before time $T_N$ . Thus $\ell^\pm(T_N,i)=0$ for any $i \geq I^+$ .

We will also need the auxiliary random variables ${\tilde{I}}^+=\inf\{i \geq \chi(N)\,|\,\ell^{-}(T_N,i)\leq\big\lfloor N^{1/6}\big\rfloor\}$ and ${\tilde{I}}^{-}=\sup\big\{i < \chi(N)\,|\,\ell^+(T_N,i)\leq\big\lfloor N^{1/6}\big\rfloor\big\}$ .

3.1. Place where we hit 0

We have the following result on the control of $I^+$ and $I^{-}$ .

Lemma 7. For any $\delta>0$ , $\mathbb{P}\big(|I^{-}+(|x|+2\theta)N| \geq N^{\delta+1/2}\big)$ and $\mathbb{P}(|I^+-(|x|+2\theta)N| \geq N^{\delta+1/2})$ tend to 0 when N tends to $+\infty$ .

Proof. The idea is to control the fluctuations of the local times around their deterministic limit: as long as $\ell^\pm(T_N,i)$ is large, the $\ell^\pm(T_N,i+1)-\ell^\pm(T_N,i)$ will be close to the i.i.d. random variables of the coupling, so the fluctuations of $\ell^\pm(T_N,i)$ around its deterministic limit are bounded and $\ell^\pm(T_N,i)$ can be small only when the deterministic limit is small, that is, around $-(|x|+2\theta)N$ and $(|x|+2\theta)N$ . We spell out the proof only for $I^{-}$ , as the argument for $I^+$ is similar.

The fact that $\mathbb{P}\big(I^{-}+(|x|+2\theta)N \leq -N^{\delta+1/2}\big)$ tends to 0 when N tends to $+\infty$ comes from the inequalities (51) and (53) in [Reference Tóth and Vető17], so we only have to prove that $\mathbb{P}(I^{-}+(|x|+2\theta)N \geq N^{\delta+1/2})$ tends to 0 when N tends to $+\infty$ . Since $I^{-} \leq {\tilde{I}}^{-}$ , it is enough to prove that $\mathbb{P}({\tilde{I}}^{-}+(|x|+2\theta)N \geq N^{\delta+1/2})$ tends to 0 when N tends to $+\infty$ . Since by Lemma 2 we have that $\mathbb{P}\big(\mathcal{B}_1^{-}\big)$ tends to 0 when N tends to $+\infty$ , it is enough to prove that $\mathbb{P}({\tilde{I}}^{-}+(|x|+2\theta)N \geq N^{\delta+1/2},\big(\mathcal{B}_1^{-}\big)^c)$ tends to 0 when N tends to $+\infty$ .

We now assume N is large enough, ${\tilde{I}}^{-}+(|x|+2\theta)N \geq N^{\delta+1/2}$ , and $\big(\mathcal{B}_1^{-}\big)^c$ . Then there exists $i\in\{\lceil-(|x|+2\theta)N+N^{\delta+1/2}\rceil,\ldots,\chi(N)-1\}$ such that $\ell^+(T_N,i) \leq \big\lfloor N^{1/6}\big\rfloor$ and $\ell^+(T_N,j) > \big\lfloor N^{1/6}\big\rfloor$ for all $j \in\{i+1,\ldots,\chi(N)-1\}$ . Thus, by (1) we get

\begin{equation*}\lfloor N \theta\rfloor+\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}(\ell^+(T_N,j))+\mathbb{1}_{\{j > 0\}}\big)=\ell^+(T_N,i) \leq \big\lfloor N^{1/6}\big\rfloor.\end{equation*}

Furthermore, for all $j \in\{i+1,\ldots,\chi(N)-1\}$ , since $\big(\mathcal{B}_1^{-}\big)^c$ occurs and $\ell^+(T_N,j) > \big\lfloor N^{1/6}\big\rfloor$ , we have $\eta_{j,-}(\ell^+(T_N,j))+1/2=\zeta_j$ . We deduce that

\begin{equation*}\lfloor N \theta\rfloor + \sum_{j=i+1}^{\chi(N)-1} \left(\zeta_j+\frac{\mathbb{1}_{\{j > 0\}}-\mathbb{1}_{\{j \leq 0\}}}{2}\right) \leq \lfloor N^{1/6}\rfloor;\end{equation*}

thus

\begin{equation*}\sum_{j=i+1}^{\chi(N)-1}\zeta_j+\lfloor N \theta\rfloor+\sum_{j=i+1}^{\chi(N)-1}\frac{\mathbb{1}_{\{j > 0\}}-\mathbb{1}_{\{j \leq 0\}}}{2}\leq \lfloor N^{1/6}\rfloor.\end{equation*}

Moreover, since $i \in\{\lceil-(|x|+2\theta)N+N^{\delta+1/2}\rceil,\ldots,\chi(N)-1\}$ , we have

\begin{align*}\sum_{j=i+1}^{\chi(N)-1}\frac{\mathbb{1}_{\{j > 0\}}-\mathbb{1}_{\{j \leq 0\}}}{2} &=\frac{1}{2}(\chi(N)-1+i) \\&\geq \frac{1}{2}\big(Nx-2-(|x|+2\theta)N+N^{\delta+1/2}\big)\\&=-\theta N+\frac{1}{2}N^{\delta+1/2}-1.\end{align*}

This yields $\sum_{j=i+1}^{\chi(N)-1}\zeta_j+\lfloor N \theta\rfloor-\theta N+\frac{1}{2}N^{\delta+1/2}-1 \leq \big\lfloor N^{1/6}\big\rfloor$ ; hence

\begin{equation*}\sum_{j=i+1}^{\chi(N)-1}\zeta_j \leq -\frac{1}{2}N^{\delta+1/2}+\big\lfloor N^{1/6}\big\rfloor+2 \leq -N^{(1+\delta)/2}\end{equation*}

since N is large enough. Consequently, when N is large enough,

\begin{align*}&\mathbb{P}\big({\tilde{I}}^{-}+(|x|+2\theta)N \geq N^{\delta+1/2},\big(\mathcal{B}_1^{-}\big)^c\big) \\& \qquad \leq \mathbb{P} \left(\exists\, i\in\{\lceil-(|x|+2\theta)N+N^{\delta+1/2}\rceil,\ldots,\chi(N)-1\}, \sum_{j=i+1}^{\chi(N)-1}\zeta_j \leq -N^{(1+\delta)/2} \right).\end{align*}

Since the $\zeta_i$ , $i\in\mathbb{Z}$ , are i.i.d., when N is large enough this yields

\begin{equation*}\mathbb{P}\big({\tilde{I}}^{-}+(|x|+2\theta)N \geq N^{\delta+1/2},\big(\mathcal{B}_1^{-}\big)^c\big) \leq \mathbb{P}\left(\max_{0 \leq i_1 \leq i_2 \leq \lceil N^{1+\delta/2}\rceil}\left|\sum_{i=i_1}^{i_2}\zeta_i\right|\geq N^{(1+\delta)/2}\right),\end{equation*}

which tends to 0 when N tends to $+\infty$ by Lemma 4 (applied with $\alpha=1+\delta/2$ and $\varepsilon=\delta/4$ ). This shows that $\mathbb{P}\big(I^{-}+(|x|+2\theta)N \geq N^{\delta+1/2}\big)$ converges to 0 when N tends to $+\infty$ , which completes the proof of Lemma 7.

3.2. Control of low local times

We have to show that even when $\ell^\pm(T_N,i)$ is small, the local times are not too far from the random variables of the coupling. In order to do that, we first prove that the window where $\ell^\pm(T_N,i)$ is small but not zero—that is, between ${\tilde{I}}^+$ and $I^+$ and between $I^{-}$ and ${\tilde{I}}^{-}$ —is small. Afterwards, we will give bounds on what happens inside. We begin by showing the following easy result.

Lemma 8. $\mathbb{P}({\tilde{I}}^{-} \geq 0)$ tends to 0 when $N \to +\infty$ .

Proof. Let N be large enough. If ${\tilde{I}}^{-} \geq 0$ , there exists $i\in\{0,\ldots,\lfloor Nx\rfloor\}$ such that $\ell^+(T_N,i)\leq\big\lfloor N^{1/6}\big\rfloor$ . Since N is large enough, this implies $\ell^+(T_N,i)\leq N\theta/2$ , while the deterministic limit $\big(\frac{|x|-|y|}{2}+\theta\big)_+$ is at least $\theta$ at $y=i/N\in[0,x]$ ; therefore

\begin{equation*}\sup_{y \in \mathbb{R}}\left|\frac{1}{N}\ell^+\big(T_N,\lfloor N y\rfloor\big)-\left(\frac{|x|-|y|}{2}+\theta\right)_+ \right| \geq \theta/2.\end{equation*}

Moreover, by [Reference Tóth and Vető17, Theorem 1], $\sup_{y \in \mathbb{R}}\big|\frac{1}{N}\ell^+\big(T_N,\lfloor N y\rfloor\big)-\Big(\frac{|x|-|y|}{2}+\theta\Big)_+\big|$ converges in probability to 0 when N tends to $+\infty$ ; hence we deduce that

\begin{equation*}\mathbb{P}\left(\sup_{y \in \mathbb{R}}\left|\frac{1}{N}\ell^+\big(T_N,\lfloor N y\rfloor\big)-\left(\frac{|x|-|y|}{2}+\theta\right)_+ \right| \geq \theta/2\right)\end{equation*}

tends to 0 when $N \to +\infty$ . Therefore $\mathbb{P}({\tilde{I}}^{-} \geq 0)$ tends to 0 when $N \to +\infty$ .

In order to control $I^+$ , $I^{-}$ , ${\tilde{I}}^+$ , and ${\tilde{I}}^{-}$ , we will use the fact that the local times behave as the Markov chain L from [Reference Tóth and Vető17], defined as follows. We consider i.i.d. copies of the Markov chain $\eta$ starting at 0, called $(\eta_m)_{m\in\mathbb{N}}$ . For any $m\in\mathbb{N}$ , we then set $L(m+1)=L(m)+\eta_{m}(L(m))$ . We let $\tau=\inf\{m\in\mathbb{N}\,|\,L(m) \leq 0\}$ . The following was proven in [Reference Tóth and Vető17].

Lemma 9. ([Reference Tóth and Vető17, Lemma 2].) There exists a constant $K < +\infty$ such that for any $k\in \mathbb{N}$ we have $\mathbb{E}(\tau|L(0)=k) \leq 3k+K$ .
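Lemma 9 can be illustrated by a small Monte Carlo experiment. The sketch below (again with the illustrative weight $w(k)=e^k$; the sample sizes are arbitrary) simulates the chain L from i.i.d. copies of $\eta$ as defined above and estimates $\mathbb{E}(\tau|L(0)=k)$ for a few values of k, which should indeed grow at most linearly in k.

```python
import math
import random

# Monte Carlo sketch of Lemma 9 (illustrative weight w(k) = exp(k), not a
# choice made in the paper): estimate E(tau | L(0)=k) for a few values of k.
w = lambda k: math.exp(k)

def eta_value_after(n, rng):
    """Value of a fresh copy of the chain eta after n steps (eta(0) = 0):
    each eta-step runs xi from the current value until its next downward step."""
    v = 0
    for _ in range(n):
        u = v
        while True:
            if rng.random() < w(-u) / (w(u) + w(-u)):  # upward step of xi
                u += 1
            else:                                      # downward step of xi
                v = u - 1
                break
    return v

def hitting_time_of_zero(k, rng):
    """tau = inf{m : L(m) <= 0} for L(m+1) = L(m) + eta_m(L(m)), L(0) = k."""
    L, m = k, 0
    while L > 0:
        L += eta_value_after(L, rng)
        m += 1
    return m

rng = random.Random(0)
for k in (1, 5, 10, 20):
    runs = [hitting_time_of_zero(k, rng) for _ in range(500)]
    print(k, sum(runs) / len(runs))   # should stay below 3*k + K for some K
```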

Since the local times will behave as L, Lemma 9 implies that if the local time starts out small, then the time at which it reaches 0 has small expectation and hence is not too large. This will help us to prove the following control on the window where $\ell^\pm(T_N,i)$ is small but not zero.

Lemma 10. $\mathbb{P}\big(I^+-{\tilde{I}}^+ \geq N^{1/4}\big)$ and $\mathbb{P}\big({\tilde{I}}^{-}-I^{-} \geq N^{1/4}\big)$ tend to 0 when $N \to +\infty$ .

Proof. Let N be large enough. We deal only with $\mathbb{P}\big({\tilde{I}}^{-}-I^{-} \geq N^{1/4}\big)$ , since $\mathbb{P}\big(I^+-{\tilde{I}}^+ \geq N^{1/4}\big)$ can be dealt with in the same way and with simpler arguments. Thanks to Lemma 8, it is enough to prove that $\mathbb{P}\big({\tilde{I}}^{-}-I^{-} \geq N^{1/4},{\tilde{I}}^{-}<0\big)$ tends to 0 when $N \to +\infty$ . Moreover, if ${\tilde{I}}^{-} < 0$ , thanks to (1), for any $i < {\tilde{I}}^{-}$ we get $\ell^+(T_N,i)=\ell^+(T_N,{\tilde{I}}^{-})+\sum_{j=i+1}^{{\tilde{I}}^{-}}\eta_{j,-}(\ell^+(T_N,j))$ , which allows us to prove that $(\ell^+(T_N,{\tilde{I}}^{-}-i))_{i \in \mathbb{N}}$ is a Markov chain with the transition probabilities of L. Therefore, recalling the notation just before Lemma 9, we have

\begin{equation*}\begin{split} \mathbb{P}\left({\tilde{I}}^{-}-I^{-} \geq N^{1/4},{\tilde{I}}^{-}<0\right) &=\sum_{k=0}^{\lfloor N^{1/6} \rfloor}\mathbb{P}\left({\tilde{I}}^{-}-I^{-} \geq N^{1/4},{\tilde{I}}^{-}<0\,\left|\,\ell^+(T_N,{\tilde{I}}^{-})=k\right.\right)\mathbb{P}\left(\ell^+(T_N,{\tilde{I}}^{-})=k\right) \\ &=\sum_{k=0}^{\lfloor N^{1/6} \rfloor}\mathbb{P}\left(\left.\tau \geq N^{1/4}\,\right|\,L(0)=k\right)\mathbb{P}\left(\ell^+(T_N,{\tilde{I}}^{-})=k\right) \\ &\leq \sum_{k=0}^{\lfloor N^{1/6} \rfloor}\frac{1}{N^{1/4}}\mathbb{E}(\tau|L(0)=k)\,\mathbb{P}(\ell^+(T_N,{\tilde{I}}^{-})=k).\end{split}\end{equation*}

By Lemma 9 we deduce that

\begin{equation*} \mathbb{P}\left({\tilde{I}}^{-}-I^{-} \geq N^{1/4},{\tilde{I}}^{-}<0\right) \leq \frac{1}{N^{1/4}}\sum_{k=0}^{\lfloor N^{1/6} \rfloor}(3k+K)\mathbb{P}(\ell^+(T_N,{\tilde{I}}^{-})=k) \leq \frac{3N^{1/6}+K}{N^{1/4}} \leq 4 N^{-1/12}, \end{equation*}

since N is large enough; hence $\mathbb{P}({\tilde{I}}^{-}-I^{-} \geq N^{1/4},{\tilde{I}}^{-}<0)$ tends to 0 when $N \to +\infty$ , which completes the proof.

We are now going to prove that even when $\ell^\pm(T_N,i)$ is small, the local times are not too far from the random variables of the coupling. More precisely, for any $N\in\mathbb{N}$, we define the following events:

\begin{equation*} \mathcal{B}_4^{-}=\left\{\exists \, i \in \{I^{-},\ldots,\chi(N)-1\},\left|\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{\chi(N)-1}\zeta_j\right| \geq N^{1/3} \right\},\end{equation*}
\begin{equation*} \mathcal{B}_4^+=\left\{\exists \, i \in \{\chi(N),\ldots,I^+\},\left|\sum_{j=\chi(N)}^{i-1}(\eta_{j,+}(\ell^{-}(T_N,j))+1/2)-\sum_{j=\chi(N)}^{i-1}\zeta_j\right| \geq N^{1/3} \right\}.\end{equation*}
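As a side remark, the quantity appearing in these events is simply the largest discrepancy between two sequences of running sums. A minimal computational sketch is given below, assuming the shifted increments $\eta_{j,\mp}(\ell^\pm(T_N,j))+1/2$ and the $\zeta_j$ of the relevant window have been collected into two arrays (the function and argument names are hypothetical, not notation from the paper).

\begin{verbatim}
import numpy as np

def max_partial_sum_gap(increments, zetas):
    # Largest absolute discrepancy between the running sums of the two
    # input sequences.  Feeding the window {chi(N), ..., I^+ - 1} in
    # increasing order gives the quantity of B_4^+; feeding the window
    # {I^- + 1, ..., chi(N) - 1} in decreasing order gives that of B_4^-.
    inc = np.asarray(increments, dtype=float)
    zet = np.asarray(zetas, dtype=float)
    gaps = np.abs(np.cumsum(inc) - np.cumsum(zet))
    return float(gaps.max())
\end{verbatim}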

Lemma 11. $\mathbb{P}\big(\mathcal{B}_4^{-}\big)$ and $\mathbb{P}\big(\mathcal{B}_4^+\big)$ tend to 0 when N tends to $+\infty$.

Proof. The idea of the argument is that when $\ell^\pm(T_N,i)$ is large, $\eta_{i,\mp}(\ell^\pm(T_N,i))+1/2=\zeta_i$ thanks to Lemma 2; that the window where $\ell^\pm(T_N,i)$ is small is bounded by Lemma 10; and that inside this window the $\eta_{i,\mp}(\ell^\pm(T_N,i))+1/2$, $\zeta_i$ are also bounded by Lemma 3. We spell out the proof only for $\mathbb{P}\big(\mathcal{B}_4^{-}\big)$, since the proof for $\mathbb{P}\big(\mathcal{B}_4^+\big)$ is the same. By Lemma 7, we have that $\mathbb{P}(I^{-} \leq -2(|x|+\theta)N)$ tends to 0 when N tends to $+\infty$. Furthermore, Lemma 10 implies that $\mathbb{P}\big({\tilde{I}}^{-}-I^{-} \geq N^{1/4}\big)$ tends to 0 when N tends to $+\infty$. In addition, by Lemmas 2 and 3 we have that $\mathbb{P}\big(\mathcal{B}_1^{-}\big)$ and $\mathbb{P}(\mathcal{B}_2)$ tend to 0 when N tends to $+\infty$. Consequently, it is enough to prove that for N large enough, if $\big(\mathcal{B}_1^{-}\big)^c$ and $(\mathcal{B}_2)^c$ occur, if ${\tilde{I}}^{-}-I^{-} < N^{1/4}$, and if $I^{-} > -2(|x|+\theta)N$, then $\big(\mathcal{B}_4^{-}\big)^c$ occurs. We assume $\big(\mathcal{B}_1^{-}\big)^c$, $(\mathcal{B}_2)^c$, ${\tilde{I}}^{-}-I^{-} < N^{1/4}$, and $I^{-} > -2(|x|+\theta)N$. Since $\big(\mathcal{B}_1^{-}\big)^c$ occurs and ${\tilde{I}}^{-} \geq I^{-} > -2(|x|+\theta)N$, we get $\zeta_i=\eta_{i,-}(\ell^+(T_N,i))+1/2$ for any $i\in\{{\tilde{I}}^{-}+1,\ldots,\chi(N)-1\}$. Therefore, if $i\in\{{\tilde{I}}^{-},\ldots,\chi(N)-1\}$ we get

\begin{equation*}\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{\chi(N)-1}\zeta_j=0,\end{equation*}

and for $i \in \{I^{-},\ldots,{\tilde{I}}^{-}-1\}$ we have

\begin{align*}
\left|\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{\chi(N)-1}\zeta_j\right| & = \left|\sum_{j=i+1}^{{\tilde{I}}^{-}}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{{\tilde{I}}^{-}}\zeta_j\right| \\
& \leq \sum_{j=i+1}^{{\tilde{I}}^{-}}\left(|\eta_{j,-}(\ell^+(T_N,j))+1/2|+|\zeta_j|\right) \leq 2({\tilde{I}}^{-} -I^{-})N^{1/16},
\end{align*}

since $(\mathcal{B}_2)^c$ occurs, $i+1 \geq I^{-} > -2(|x|+\theta)N$, and by definition ${\tilde{I}}^{-} \leq \chi(N)-1 \leq 2(|x|+\theta)N$. Moreover, we assumed ${\tilde{I}}^{-}-I^{-} < N^{1/4}$, which implies

\begin{equation*}\left|\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{\chi(N)-1}\zeta_j \right| \leq 2 N^{1/4}N^{1/16}=2N^{5/16} < N^{1/3}\end{equation*}

when N is large enough. Consequently, for any $i\in\{I^{-},\ldots,\chi(N)-1\}$ we have

\begin{equation*}\left|\sum_{j=i+1}^{\chi(N)-1}\big(\eta_{j,-}\big(\ell^+\big(T_N,j\big)\big)+1/2\big)-\sum_{j=i+1}^{\chi(N)-1}\zeta_j \right| < N^{1/3};\end{equation*}

therefore $\big(\mathcal{B}_4^{-}\big)^c$ occurs, which completes the proof.

4. Skorokhod $M_1$ distance

The goal of this section is to prove that when N is large, $Y_N^\pm$ is close in the Skorokhod $M_1$ distance to the function $Y_N$ defined as follows. For any N large enough, for $y\in\mathbb{R}$ , we set

\begin{equation*}Y_N(y)=\frac{1}{\sqrt{N}}\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i\end{equation*}

if $y \in \Big[{-}|x|-2\theta,\frac{\chi(N)}{N}\Big)$ ,

\begin{equation*}Y_N(y)=\frac{1}{\sqrt{N}}\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\zeta_i\end{equation*}

if $y \in \Big[\frac{\chi(N)}{N},|x|+2\theta\Big)$ , and $Y_N(y)=0$ otherwise. We want to prove the following proposition.
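Before stating it, and purely for concreteness, the piecewise definition of $Y_N$ above can be transcribed directly as follows; the sketch assumes the coupling variables $\zeta_i$ are available as a dictionary indexed by the sites $i$ (the function name and argument names are hypothetical).

\begin{verbatim}
import math

def Y_N(y, zeta, chi_N, N, abs_x, theta):
    # Piecewise definition of Y_N; zeta is a dict mapping each site i to
    # the coupling variable zeta_i, chi_N stands for chi(N), and abs_x
    # for |x|.
    if -abs_x - 2 * theta <= y < chi_N / N:
        s = sum(zeta[i] for i in range(math.floor(N * y) + 1, chi_N))
    elif chi_N / N <= y < abs_x + 2 * theta:
        s = sum(zeta[i] for i in range(chi_N, math.floor(N * y)))
    else:
        return 0.0
    return s / math.sqrt(N)
\end{verbatim}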

Proposition 5. $\mathbb{P}\big(d_{M_1}\big(Y_N^\pm,Y_N\big) > 3N^{-1/12}\big)$ tends to 0 when N tends to $+\infty$ .

If we denote by $\mathcal{B}$ the event

\begin{align*} & \mathcal{B}_1^{-} \cup \mathcal{B}_1^+ \cup \mathcal{B}_2 \cup \mathcal{B}_3^{-} \cup \mathcal{B}_3^+ \cup \mathcal{B}_4^{-} \cup \mathcal{B}_4^+ \cup \big\{|I^{-}+(|x|+2\theta)N| \geq N^{3/4}\big\}\\& \cup \big\{|I^+-(|x|+2\theta)N| \geq N^{3/4}\big\},\end{align*}

it will be enough to prove the following proposition.

Proposition 6. When N is large enough, for all $a >0$ with $|(|x|+2\theta)-a|>N^{-1/8}$ , we have that $\mathcal{B}^c \subset \big\{d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \leq 2N^{-1/12}\big\}$ .

Proof of Proposition 5 given Proposition 6. We assume Proposition 6 holds. Then, when N is large enough, if $\mathcal{B}^c$ occurs, for all $a >0$ with $|(|x|+2\theta)-a|>N^{-1/8}$ we have $d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \leq 2N^{-1/12}$, while for the excluded values of a, which form an interval of length $2N^{-1/8}$, we simply bound $e^{-a}\big(d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \wedge 1\big)$ by 1. This yields that

\begin{align*}d_{M_1}\big(Y_N^\pm,Y_N\big)&=\int_0^{+\infty}e^{-a}\big(d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \wedge 1 \big)\mathrm{d}a \\&\leq \int_0^{+\infty}e^{-a}2N^{-1/12}\mathrm{d}a + 2 N^{-1/8}\\&= 2N^{-1/12}+ 2 N^{-1/8} \leq 3 N^{-1/12}.\end{align*}

This implies that $\mathbb{P}\big(d_{M_1}\big(Y_N^\pm,Y_N\big) > 3 N^{-1/12}\big) \leq \mathbb{P}(\mathcal{B})$ when N is large enough. In addition,

\begin{align*} \mathbb{P}(\mathcal{B}) & \leq \mathbb{P}\big(\mathcal{B}_1^{-}\big) +\mathbb{P}\big(\mathcal{B}_1^+\big) + \mathbb{P}(\mathcal{B}_2) + \mathbb{P}\big(\mathcal{B}_3^{-}\big) + \mathbb{P}\big(\mathcal{B}_3^+\big) + \mathbb{P}\big(\mathcal{B}_4^{-}\big) + \mathbb{P}\big(\mathcal{B}_4^+\big) \\ & \quad +\mathbb{P}\big(|I^{-}+(|x|+2\theta)N| \geq N^{3/4}\big) + \mathbb{P}\big(|I^+-(|x|+2\theta)N| \geq N^{3/4}\big).\end{align*}

Applying Lemmas 2, 3, 5, 7, and 11 implies that $\mathbb{P}(\mathcal{B})$ tends to 0 when N tends to $+\infty$ ; hence $\mathbb{P}\big(d_{M_1}\big(Y_N^\pm,Y_N\big) > 3 N^{-1/12}\big)$ tends to 0 when N tends to $+\infty$ , which is Proposition 5.

The remainder of this section is devoted to the proof of Proposition 6. We first show that between $\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N}$ and $\frac{((|x|+2\theta)N) \wedge I^+}{N}$, the functions $Y_N^\pm$ and $Y_N$ are close in the uniform distance; this is the content of the following lemma.

Lemma 12. When N is large enough, if $(\mathcal{B}_2)^c$ , $\big(\mathcal{B}_4^{-}\big)^c$ , and $\big(\mathcal{B}_4^+\big)^c$ occur, we have the following: if $I^+ < (|x|+2\theta)N$ , then for any

\begin{equation*}y\in \left[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N}\right]\end{equation*}

we have $\big|Y_N^\pm(y)-Y_N(y)\big| \leq N^{-1/12}$ . If $I^+ \geq (|x|+2\theta)N$ , then we have $\big|Y_N^\pm(y)-Y_N(y)\big| \leq N^{-1/12}$ for $y\in \left[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N} \right)$ .

Proof of Lemma 12. Writing down the proof is only a technical matter, as the meaning of $\big(\mathcal{B}_4^\pm\big)^c$ is that the local times are close to the process formed from the random variables of the coupling. The event $(\mathcal{B}_2)^c$ is there to ensure that the difference terms that appear will be small. We spell out the proof only for $Y_N^{-}$, as the proof for $Y_N^+$ is similar. We assume $(\mathcal{B}_2)^c$, $\big(\mathcal{B}_4^{-}\big)^c$, and $\big(\mathcal{B}_4^+\big)^c$. Then if

\begin{equation*}y \in \left[\frac{\chi(N)}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N} \right]\end{equation*}

$\big($ if $I^+\geq(|x|+2\theta)N$ we exclude the case $y=\frac{((|x|+2\theta)N) \wedge I^+}{N}\big)$ , we have $y \in \big[\frac{\chi(N)}{N},|x|+2\theta\big)$ , so

\begin{equation*}\big|Y_N^{-}(y)-Y_N(y)\big| = \frac{1}{\sqrt{N}}\bigg|\ell^{-}\big(T_N,\lfloor N y\rfloor\big)-N\left(\frac{|x|-|y|}{2}+\theta\right)_+-\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\zeta_i\bigg|;\end{equation*}

thus by (1) we obtain that $\big|Y_N^{-}(y)-Y_N(y)\big|$ is equal to

\begin{align*}
& \frac{1}{\sqrt{N}}\left|\lfloor N \theta\rfloor-\mathbb{1}_{\{\iota=+\}}+\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)-N\left(\frac{|x|-|y|}{2}+\theta\right)_+-\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\zeta_i\right| \\
& \leq \frac{1}{\sqrt{N}}\left|\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+\frac{\lfloor N y\rfloor-\chi(N)}{2}-\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\zeta_i\right|+\frac{3}{\sqrt{N}} \\
& = \frac{1}{\sqrt{N}}\left|\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\big(\eta_{i,+}\big(\ell^{-}\big(T_N,i\big)\big)+1/2\big)-\sum_{i=\chi(N)}^{\lfloor N y\rfloor-1}\zeta_i\right|+\frac{3}{\sqrt{N}}.
\end{align*}

Now, $y \in \big[\frac{\chi(N)}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N}\big]$ implies $\lfloor N y\rfloor \in \{\chi(N),\ldots,I^+\}$ ; thus $\big(\mathcal{B}_4^+\big)^c$ yields $\big|Y_N^{-}(y)-Y_N(y)\big| \leq \frac{1}{\sqrt{N}}N^{1/3}+\frac{3}{\sqrt{N}} \leq N^{-1/12}$ when N is large enough. We now consider the case $y \in \big[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{\chi(N)}{N}\big)$ . Then $y \in \big[{-}|x|-2\theta,\frac{\chi(N)}{N}\big)$ , and hence

\begin{equation*}\big|Y_N^{-}(y)-Y_N(y)\big| = \frac{1}{\sqrt{N}} \left|\ell^{-}\big(T_N,\lfloor N y\rfloor\big)-N\bigg(\frac{|x|-|y|}{2}+\theta\bigg)_+ -\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i \right|.\end{equation*}

Now, (2) yields $|\ell^{-}\big(T_N,\lfloor N y\rfloor\big)-\ell^+\big(T_N,\lfloor N y\rfloor\big)|=|\eta_{\lfloor N y\rfloor,-}\big(\ell^+\big(T_N,\lfloor N y\rfloor\big)\big)|$ , which is smaller than $N^{1/16}+1/2$ thanks to $(\mathcal{B}_2)^c$ . We deduce that

\begin{equation*}\big|Y_N^{-}(y)-Y_N(y)\big| \leq \frac{1}{\sqrt{N}}\left|\ell^+\big(T_N,\lfloor N y\rfloor\big)-N\bigg(\frac{|x|-|y|}{2}+\theta\bigg)_+ -\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i \right|+\frac{N^{1/16}+1/2}{\sqrt{N}};\end{equation*}

thus (1) implies that $\big|Y_N^{-}(y)-Y_N(y)\big|$ is smaller than

\begin{align*}
& \frac{1}{\sqrt{N}}\left|\lfloor N \theta\rfloor+\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\big(\eta_{i,-}\big(\ell^+\big(T_N,i\big)\big)+\mathbb{1}_{\{i > 0\}}\big)-N\bigg(\frac{|x|-|y|}{2}+\theta\bigg)_+-\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i\right|+\frac{N^{1/16}+1/2}{\sqrt{N}} \\
& \leq \frac{1}{\sqrt{N}}\left|\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\big(\eta_{i,-}\big(\ell^+\big(T_N,i\big)\big)+\mathbb{1}_{\{i > 0\}}\big)+\frac{\lfloor N y\rfloor+1-\chi(N)}{2}-\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i\right|+\frac{N^{1/16}+3}{\sqrt{N}} \\
& \leq \frac{1}{\sqrt{N}}\left|\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\big(\eta_{i,-}\big(\ell^+\big(T_N,i\big)\big)+1/2\big)-\sum_{i=\lfloor N y\rfloor+1}^{\chi(N)-1}\zeta_i\right|+\frac{N^{1/16}+3}{\sqrt{N}}.
\end{align*}

Furthermore, $y \in \Big[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{\chi(N)}{N}\Big)$ implies $\lfloor N y\rfloor \in \{I^{-},\ldots,\chi(N)-1\}$ ; hence $\big(\mathcal{B}_4^{-}\big)^c$ yields

\begin{equation*}\big|Y_N^{-}(y)-Y_N(y)\big| \leq \frac{1}{\sqrt{N}}N^{1/3}+\frac{N^{1/16}+3}{\sqrt{N}} \leq N^{-1/12}\end{equation*}

when N is large enough. Consequently, for any $y\in\big[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N}\big]$ we have $\big|Y_N^{-}(y)-Y_N(y)\big| \leq N^{-1/12}$ , which completes the proof of Lemma 12.

We now prove Proposition 6. Let $a>0$ be such that $|(|x|+2\theta)-a|>N^{-1/8}$ . We will prove that when N is large enough, $\mathcal{B}^c \subset \big\{d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \leq 2N^{-1/12}\big\}$ , and the threshold for N given by the proof will not depend on the value of a. There will be two cases depending on whether a is smaller than $|x|+2\theta$ or not.

4.1. Case $a \in \big(0, |x|+2\theta-N^{-1/8}\big)$

This is the easier case. Indeed, the interval $[{-}a,a]$ will then be contained in $\Big[\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N}\Big)$, inside which $Y_N^\pm$ and $Y_N$ are close for the uniform norm by Lemma 12. We may then define parametric representations $\big(u_N^\pm,r_N^\pm\big)$ and $(u_N,r_N)$ of $Y_N^\pm|_{[{-}a,a]}$ and $Y_N|_{[{-}a,a]}$ ‘following the graphs of $Y_N^\pm|_{[{-}a,a]}$ and $Y_N|_{[{-}a,a]}$ together' so that $u_N^\pm(t)=u_N(t)$ for all $t\in[0,1]$, and

\begin{equation*}\big\|r_N^\pm-r_N\big\|_\infty \leq \sup_{y\in[{-}a,a]} \left|Y_N^\pm(y)-Y_N(y)\right|\end{equation*}

(an explicit construction of these representations can be found in the first arXiv version of this paper [Reference Marêché5]). We deduce that

\begin{equation*}d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \leq \sup_{y\in[{-}a,a]}\big|Y_N^\pm(y)-Y_N(y)\big|.\end{equation*}

Moreover, if $\mathcal{B}^c$ occurs, since $a\in\big(0,|x|+2\theta-N^{-1/8}\big)$ , for any $y \in [{-}a,a]$ we have $y \in \big({-}|x|-2\theta+N^{-1/8},|x|+2\theta-N^{-1/8}\big)$ ; thus $-(|x|+2\theta)N+N^{3/4} \leq N y \leq (|x|+2\theta)N-N^{3/4}$ , which implies $I^{-} < N y < I^+$ , and hence $y\in\big(\frac{({-}(|x|+2\theta)N) \vee I^{-}}{N},\frac{((|x|+2\theta)N) \wedge I^+}{N}\big)$ . So by Lemma 12 we have $\big|Y_N^\pm(y)-Y_N(y)\big| \leq N^{-1/12}$ . Consequently, if $\mathcal{B}^c$ occurs, then $d_{M_1,a}\big(Y_N^\pm|_{[{-}a,a]},Y_N|_{[{-}a,a]}\big) \leq N^{-1/12}$ .

4.2. Case $a > |x|+2\theta+N^{-1/8}$

This is the harder case, as we have to deal with what happens around $|x|+2\theta$ and $-|x|-2\theta$ . We write down only the proof for $Y_N^{-}$ , since the proof for $Y_N^+$ is similar (one may remember that (2) allows us to bound the $\ell^{-}(T_N,i)-\ell^+(T_N,i)$ when $(\mathcal{B}_2)^c$ occurs, and hence when $\mathcal{B}^c$ occurs). Once again, we will define parametric representations $\big(u_N^{-},r_N^{-}\big)$ and $(u_N,r_N)$ of $Y_N^{-}|_{[{-}a,a]}$ and $Y_N|_{[{-}a,a]}$ . The definition will depend on whether $I^+ \leq \lfloor(|x|+2\theta)N\rfloor$ or not, and also on whether $I^{-} \geq -\lfloor(|x|+2\theta)N\rfloor$ or not. We explain it for abscissas in [0,a] depending on whether $I^+ \leq \lfloor(|x|+2\theta)N\rfloor$ or not; the constructions for abscissas in $[{-}a,0]$ are similar, depending on whether $I^{-} \geq -\lfloor(|x|+2\theta)N\rfloor$ or not.

We first assume $I^+ \leq \lfloor(|x|+2\theta)N\rfloor$. Between 0 and $\frac{I^+}{N}$, as in the case $a\in(0,|x|+2\theta-N^{-1/8})$, the parametric representations will follow the completed graphs of $Y_N^{-}$ and $Y_N$ in parallel (see Figure 1(a)). The next step, once $\big(u_N^{-},r_N^{-}\big)$ has reached $\left(\frac{I^+}{N},Y_N^{-}\left(\frac{I^+}{N} \right)\right)$, is to freeze it there while $(u_N,r_N)$ follows the graph of $Y_N$ from $\left(\frac{I^+}{N},Y_N\left(\frac{I^+}{N} \right)\right)$ to $(|x|+2\theta,Y_N((|x|+2\theta)^{-}))$ (see Figure 1(b)). For $y \geq \frac{I^+}{N}$ we have $\ell^{-}\big(T_N,\lfloor N y\rfloor\big)=0$ (see Lemma 6); thus $Y_N^{-}(y)=-\sqrt{N}\Big(\frac{|x|-|y|}{2}+\theta\Big)_+$, and hence $Y_N^{-}\,:\,\big[\frac{I^+}{N},|x|+2\theta\big] \to \mathbb{R}$ is affine. Therefore, the following step is to simultaneously move $\big(u_N^{-},r_N^{-}\big)$ from $\left(\frac{I^+}{N},Y_N^{-}\left(\frac{I^+}{N} \right)\right)$ to $(|x|+2\theta,Y_N^{-}(|x|+2\theta))=(|x|+2\theta,0)$ and $(u_N,r_N)$ from $(|x|+2\theta,Y_N((|x|+2\theta)^{-}))$ to $(|x|+2\theta,0)$ (see Figure 1(c)); here the two parametric representations will remain close. After this step, both parametric representations are at $(|x|+2\theta,0)$, and they will go together to $(a,0)$ (see Figure 1(d)).

Figure 1. The successive steps of the parametric representations of $Y_N^{-}|_{[{-}a,a]}$ and $Y_N|_{[{-}a,a]}$ if $I^+ \leq \lfloor(|x|+2\theta)N\rfloor$ . At each step, the parts of the graphs through which the parametric representations travel are thickened.

We now assume $I^+ > \lfloor(|x|+2\theta)N\rfloor$ . We also assume $\frac{I^+}{N} \leq a$ $\big($ if $\frac{I^+}{N} > a$ , we may choose anything for $\big(u_N^{-},r_N^{-}\big)$ , $(u_N,r_N)$ ; this will not happen if $\mathcal{B}^c$ occurs $\big)$ . Between 0 and $|x|+2\theta$ , the parametric representations will follow the completed graphs of $Y_N^{-}$ and $Y_N$ in parallel (see Figure 2(a)). Once the abscissa $|x|+2\theta$ is reached, the next step is to move $\big(u_N^{-},r_N^{-}\big)$ from $(|x|+2\theta,Y_N^{-}(|x|+2\theta))$ to $\left(\frac{I^+}{N},Y_N^{-}\left(\frac{I^+}{N} \right)\right)$ , which is $\big(\frac{I^+}{N},0\big)$ , and at the same time to move