
Collapse and diffusion in harmonic activation and transport

Published online by Cambridge University Press:  27 September 2023

Jacob Calvert
Affiliation:
Department of Statistics, UC Berkeley, Evans Hall, Berkeley, CA, USA, 94720-3840; E-mail: jacob_calvert@berkeley.edu
Shirshendu Ganguly
Affiliation:
Department of Statistics, UC Berkeley, Evans Hall, Berkeley, CA, USA, 94720-3840; E-mail: sganguly@berkeley.edu
Alan Hammond
Affiliation:
Departments of Mathematics and Statistics, UC Berkeley, Evans Hall, Berkeley, CA, USA, 94720-3840; E-mail: alanmh@berkeley.edu

Abstract

For an n-element subset U of $\mathbb {Z}^2$, select x from U according to harmonic measure from infinity, remove x from U and start a random walk from x. If the walk leaves from y when it first enters the rest of U, add y to the resulting set. Iterating this procedure constitutes the process we call harmonic activation and transport (HAT).

HAT exhibits a phenomenon we refer to as collapse: Informally, the diameter shrinks to its logarithm over a number of steps which is comparable to this logarithm. Collapse implies the existence of the stationary distribution of HAT, where configurations are viewed up to translation, and the exponential tightness of diameter at stationarity. Additionally, collapse produces a renewal structure with which we establish that the center of mass process, properly rescaled, converges in distribution to two-dimensional Brownian motion.

To characterize the phenomenon of collapse, we address fundamental questions about the extremal behavior of harmonic measure and escape probabilities. Among n-element subsets of $\mathbb {Z}^2$, what is the least positive value of harmonic measure? What is the probability of escape from the set to a distance of, say, d? Concerning the former, examples abound for which the harmonic measure is exponentially small in n. We prove that it can be no smaller than exponential in $n \log n$. Regarding the latter, the escape probability is at most the reciprocal of $\log d$, up to a constant factor. We prove it is always at least this much, up to an n-dependent factor.

Type
Probability
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

1.1 Harmonic activation and transport

Consider simple random walk $(S_j)_{j \geq 0}$ on $\mathbb {Z}^2$ with $S_0 = x$ , the distribution of which we denote by $\mathbb {P}_x$ . For a finite, nonempty subset $A \subset \mathbb {Z}^2$ , the hitting distribution of A from $x \in \mathbb {Z}^2$ is the function ${\mathbb {H}}_A (x, \cdot ) : \mathbb {Z}^2 \to [0,1]$ defined as ${\mathbb {H}}_A (x,y) = \mathbb {P}_x (S_{\tau _A} =y)$ , where $\tau _A = \inf \{j \geq 1 : S_j \in A\}$ is the return time to A. (We use the notation $\tau _A$ instead of the more common $\tau _A^+$ for brevity.) The recurrence of random walk on $\mathbb {Z}^2$ guarantees that $\tau _A$ is almost surely finite, and the existence of the limit ${\mathbb {H}}_A (y) = \lim _{|x| \to \infty } {\mathbb {H}}_A (x,y)$ , called the harmonic measure of A, is well known [Law13]. Informally, the harmonic measure is the hitting distribution of a random walk ‘from infinity’.

In this paper, we introduce a Markov chain called harmonic activation and transport (HAT), wherein the elements of a subset of $\mathbb {Z}^2$ (respectively styled as ‘particles’ of a ‘configuration’) are iteratively selected according to harmonic measure and replaced according to the hitting distribution of a random walk started from the location of the selected element. We say that, with each step, a particle is ‘activated’ and then ‘transported’.

Definition 1.1 (Harmonic activation and transport).

Given a finite subset $U_0$ of $\mathbb {Z}^2$ with at least two elements, HAT is the discrete-time Markov chain $(U_t)_{t \geq 0}$ on subsets of $\mathbb {Z}^2$ , the dynamics of which consists of the following steps (Figure 1).

  • Activation. At time t, remove a random element $X_t \sim {\mathbb {H}}_{U_t}$ from $U_t$ , forming $V_t = U_t \setminus \{X_t\}$ .

  • Transport. Then, add a random element $Y_t \sim \mathbb {P}_{X_t} (S_{\tau _{V_t} - 1} \in \cdot \mid V_t)$ to $V_t$ , forming $U_{t+1} = V_t \cup \{Y_t\}$ .

In other words, $(U_t)_{t \geq 0}$ has inhomogeneous transition probabilities given by

$$ \begin{align*} \mathbf{P} \left(U_{t+1} = ( U_t{\setminus} \{x\}) \cup \{y\} \bigm\vert U_t \right) = \begin{cases} {\mathbb{H}}_{U_t} (x) \, \mathbb{P}_x \left( S_{\tau_{U_t{\setminus}\{x\}} - 1} = y \bigm\vert U_t\right) & x \neq y,\\ \sum_{z \in U_t} {\mathbb{H}}_{U_t} (z) \, \mathbb{P}_z \left( S_{\tau_{U_t{\setminus}\{z\}} - 1} = z \bigm\vert U_t\right) & x = y. \end{cases} \end{align*} $$
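For readers who want to experiment with the dynamics, the following Python sketch simulates HAT under one simplifying assumption: the activation step approximates harmonic measure by releasing the walk from a uniformly random point on a large circle around the configuration (the hitting distribution from far away converges to harmonic measure as the launch radius grows). The function and parameter names are ours, not the paper's, and the walks can be slow for sparse configurations, since planar random walk is recurrent but has infinite expected hitting times.

```python
import math
import random

def neighbors(x):
    """The four nearest neighbors of a site x = (i, j) in Z^2."""
    i, j = x
    return [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

def walk_until_hit(start, target):
    """Run simple random walk from `start` until it first enters the set
    `target`; return (hitting site, site visited immediately before)."""
    prev, cur = None, start
    while True:
        prev, cur = cur, random.choice(neighbors(cur))
        if cur in target:
            return cur, prev

def hat_step(U, launch_radius=1000):
    """One (approximate) HAT step applied to a configuration U of at least
    two sites.  Returns the next configuration U_{t+1}."""
    U = set(U)
    # Activation: approximate the harmonic measure of U by launching the walk
    # from a uniformly random point on a distant circle around U's centroid.
    cx = sum(i for i, _ in U) / len(U)
    cy = sum(j for _, j in U) / len(U)
    angle = random.uniform(0.0, 2.0 * math.pi)
    start = (round(cx + launch_radius * math.cos(angle)),
             round(cy + launch_radius * math.sin(angle)))
    x, _ = walk_until_hit(start, U)      # activated particle X_t
    V = U - {x}                          # V_t = U_t \ {X_t}
    # Transport: walk from X_t until it hits V_t; the particle settles at the
    # site visited just before the hit (possibly X_t itself).
    _, y = walk_until_hit(x, V)          # Y_t = S_{tau_{V_t} - 1}
    return V | {y}

if __name__ == "__main__":
    U = {(0, 0), (1, 0), (0, 1), (1, 1)}
    for _ in range(10):
        U = hat_step(U, launch_radius=50)   # smaller radius: cruder, faster
    print(sorted(U))
```

The launch radius trades accuracy for speed: the larger it is, the closer the activation law is to harmonic measure, at the cost of longer walks before the first hit.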

Figure 1 The harmonic activation and transport dynamics. (A) A particle (indicated by a solid, red circle) in the configuration $U_t$ is activated according to harmonic measure. (B) The activated particle (following the solid, red path) hits another particle (indicated by a solid, blue circle); it is then fixed at the site visited during the previous step (indicated by a solid, red circle), giving $U_{t+1}$ . (C) A particle of U (indicated by a red circle) is activated and (D) if it tries to move into $U {\setminus } \{x\}$ , the particle will be placed at x. The notation $\partial U$ refers to the exterior vertex boundary of U.

To guide the presentation of our results, we highlight four features of HAT. Two of these reference the diameter of a configuration, defined as $\mathrm {diam} (U) = \sup _{x,y \in U} |x-y|$ , where $|\cdot |$ is the Euclidean norm.

  • Conservation of mass. HAT conserves the number of particles in the initial configuration.

  • Translation invariance. For any configurations $V, W$ and element $x \in \mathbb {Z}^2$ ,

    $$\begin{align*}\mathbf{P} (U_{t+1} = V \bigm\vert U_t = W) = \mathbf{P} ( U_{t+1} = V + x \bigm\vert U_t = W + x). \end{align*}$$
    In words, the HAT dynamics is invariant under translation by elements of $\mathbb {Z}^2$ . Accordingly, to each configuration U, we can associate an equivalence class
    $$\begin{align*}\widehat U = \left\{ V \subseteq \mathbb{Z}^2: \exists x \in \mathbb{Z}^2: \, U = V + x\right\}.\end{align*}$$
  • Variable connectivity. The HAT dynamics does not preserve connectivity. Indeed, a configuration which is initially connected will eventually be disconnected by the HAT dynamics, and the resulting components may ‘treadmill’ away from one another, adopting configurations of arbitrarily large diameter.

  • Asymmetric behavior of diameter. While the diameter of a configuration can increase by at most $1$ with each step, it can decrease abruptly. For example, if the configuration is a pair of particles separated by d, then the diameter will decrease by $d-1$ in one step.

We will shortly state the existence of the stationary distribution of HAT. By the translation invariance of the HAT dynamics, the stationary distribution will be supported on equivalence classes of configurations which, for brevity, we will simply refer to as configurations. In fact, the HAT dynamics cannot reach all such configurations. By an inductive argument, we will prove that the HAT dynamics is irreducible on the collection of configurations that have a nonisolated element with positive harmonic measure. Figure 2 depicts a configuration that HAT cannot reach because every element with positive harmonic measure has no neighbors in $\mathbb {Z}^2$ .

Figure 2 A configuration that HAT cannot reach.

Definition 1.2. Denote by $\mathrm {{Iso}}(n)$ the collection of n-element subsets U of $\mathbb {Z}^2$ such that every x in U with ${\mathbb {H}}_U (x)> 0$ belongs to a singleton connected component. In other words, all exposed elements of U are isolated: They lack nearest neighbors in U. We will denote the collection of all other n-element subsets of $\mathbb {Z}^2$ by $\mathrm {NonIso} (n)$ and the corresponding equivalence class by

$$\begin{align*}\widehat{\mathrm{N}} \mathrm{onIso} (n) = \big\{ \widehat U: U \in \mathrm{NonIso} (n) \big\}.\end{align*}$$

The variable connectivity of HAT configurations and concomitant opportunity for unchecked diameter growth seem to jeopardize the positive recurrence of the HAT dynamics on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ . Indeed, if the diameter were to grow unabatedly, the HAT dynamics could not return to a configuration or equivalence class thereof and would therefore be doomed to transience. However, due to the asymmetric behavior of diameter under the HAT dynamics and the recurrence of random walk in $\mathbb {Z}^2$ , this will not be the case. For an arbitrary initial configuration of $n \geq 2$ particles, we will prove – up to a factor depending on n – sharp bounds on the ‘collapse’ time which, informally, is the first time the diameter is at most a certain function of n.

Definition 1.3. For a positive real number R, we define the level-R collapse time to be $\mathcal {T} (R) = \inf \{t \geq 0: \mathrm {diam} (U_t) \leq R\}$ .

For a real number $r \geq 0$ , we define $\theta _m = \theta _m (r)$ through

(1.1) $$ \begin{align} \theta_0 = r \quad\text{and} \quad \theta_m = \theta_{m-1} + e^{\theta_{m - 1}} \,\,\, \text{for}\ m \geq 1. \end{align} $$

In particular, $\theta _n (r)$ is approximately the n th iterated exponential of r.
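To justify the phrase ‘iterated exponential’ (a brief aside, assuming $r \geq 1$ for simplicity): since $x \leq e^x$ for all x and $2 e^x \leq e^{2x}$ once $x \geq \log 2$ , induction on m gives, for every $m \geq 1$ ,

$$ \begin{align*} e^{\theta_{m-1}} \leq \theta_m \leq e^{2 \theta_{m-1}}, \qquad \text{and hence} \qquad \exp^{(m)} (r) \leq \theta_m (r) \leq g^{(m)} (r), \end{align*} $$

where $\exp ^{(m)}$ denotes the m-fold iterate of $x \mapsto e^x$ and $g^{(m)}$ the m-fold iterate of $g(x) = e^{2x}$ . In other words, $\theta _m (r)$ is sandwiched between two exponential towers of height m.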

Theorem 1. Let U be a finite subset of $\mathbb {Z}^2$ with $n \geq 2$ elements and a diameter of d. There exists a universal positive constant c such that, if d exceeds $\theta = \theta _{4n} (cn)$ , then

$$\begin{align*}\mathbf{P}_{U} \left( \mathcal{T} (\theta) \leq (\log d)^{1 + o_n (1)} \right) \geq 1 - e^{-n}.\end{align*}$$

For the sake of concreteness, this is true with $n^{-4}$ in the place of $o_n (1)$ .

In words, for a given n, it typically takes $(\log d)^{1+o_n (1)}$ steps before the configuration of initial diameter d reaches a configuration with a diameter of no more than a large function of n. Here, $o_n (1)$ denotes a nonnegative function of n that is at most $1$ and which tends to zero as n tends to $\infty $ . The d dependence in Theorem 1 is essentially the best possible, aside from the $o_n (1)$ term, because two pairs of particles separated by a distance of d typically exchange particles over $\log d$ steps. We elaborate this point in Section 2.

As a consequence of Theorem 1 and the preceding discussion, it will follow that the HAT dynamics constitutes an aperiodic, irreducible and positive recurrent Markov chain on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ . In particular, this means that, from any configuration of $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ , the time it takes for the HAT dynamics to return to that configuration is finite in expectation. Aperiodicity, irreducibility and positive recurrence imply the existence and uniqueness of the stationary distribution $\pi _n$ , to which HAT converges from any n-element configuration. Moreover – again, due to Theorem 1 – the stationary distribution is exponentially tight.

Theorem 2. For every $n \geq 2$ , from any n-element subset of $\mathbb {Z}^2$ , HAT converges to a unique probability measure $\pi _n$ supported on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ . Moreover, $\pi _n$ satisfies the following tightness estimate. There exists a universal positive constant c such that, for any $r \geq 2 \theta _{4n} (c n)$ ,

$$\begin{align*}\pi_{n}\big(\mathrm{{diam}}(\widehat U)\ge r\big)\le \exp \left( - \frac{r}{(\log r)^{1+o_n(1)}} \right).\end{align*}$$

In particular, this is true with $6n^{-4}$ in the place of $o_n (1)$ .

The r dependence in the tail bound of Theorem 2 is likely suboptimal because its proof makes critical use of the fact that the diameter of a configuration increases by at most $1$ with each step. A sublinear bound on the growth rate of the diameter, holding with sufficiently high probability, would improve the rate of exponential decay. We note that an analogue of Kesten’s sublinear bound on the diameter growth rate of diffusion-limited aggregation [Kes87] would apply only to growth resulting from the exchange of particles between well separated ‘clusters’ of particles, not to growth from intracluster transport.

As a further consequence of Theorem 1, we will find that the HAT dynamics exhibits a renewal structure which underlies the diffusive behavior of the corresponding center of mass process.

Definition 1.4. For a sequence of configurations $(U_t)_{t \geq 0}$ with n particles, define the corresponding center of mass process $(\mathscr {M}_t)_{t \geq 0}$ by $\mathscr {M}_t = n^{-1} \sum _{x \in U_t} x$ .

For the following statement, denote by $\mathscr {C} ([0,1])$ the continuous functions $f: [0,1] \to \mathbb {R}^2$ with $f(0)$ equal to the origin $o \in \mathbb {Z}^2$ , equipped with the topology induced by the supremum norm $\sup _{0 \leq t \leq 1} | f(t) |$ .

Theorem 3. If $\mathscr {M}_t$ is linearly interpolated, then the law of the process $\left (t^{-1/2} \mathscr {M}_{st}, \, s \in [0,1]\right )$ , viewed as a measure on $\mathscr {C} ( [0,1] )$ , converges weakly as $t \to \infty $ to two-dimensional Brownian motion on $[0,1]$ with coordinate diffusivity $\chi ^2 = \chi ^2 (n)$ . Moreover, for a universal positive constant c, $\chi ^2$ satisfies

$$\begin{align*}\theta_{6n} (cn)^{-1} \le \chi^2 \leq \theta_{6n} (cn).\end{align*}$$

The same argument allows us to extend the convergence to $[0,\infty )$ . The bounds on $\chi ^2$ are not tight, and we have not attempted to optimize them; they primarily serve to show that $\chi ^2$ is positive and finite.

1.2 Extremal behavior of harmonic measure

As we elaborate in Section 2, the timescale of diameter collapse in Theorem 1 arises from novel estimates of harmonic measure and hitting probabilities, which control the activation and transport dynamics of HAT. Beyond their relevance to HAT, these results further the characterization of the extremal behavior of harmonic measure.

Estimates of harmonic measure often apply only to connected sets or depend on the diameter of the set. The discrete analogues of Beurling’s projection theorem [Kes87] and Makarov’s theorem [Law93] are notable examples. Furthermore, estimates of hitting probabilities often approximate sets by Euclidean balls which contain them (for example, the estimates in Chapter 2 of [Law13]). Such approximations work well for connected sets but not for sets which are ‘sparse’ in the sense that they have large diameters relative to their cardinality; we elaborate this in Section 2.2. For the purpose of controlling the HAT dynamics, which adopts such sparse configurations, existing estimates of harmonic and hitting measures are inapplicable.

To highlight the difference in the behavior of harmonic measure for general (i.e., potentially sparse) and connected sets, consider a finite subset A of $\mathbb {Z}^2$ with $n \geq 2$ elements. We ask: What is the greatest value of ${\mathbb {H}}_A (x)$ ? If we assume no more about A, then ${\mathbb {H}}_A (x)$ can be as large as $\frac 12$ (see Section 2.5 of [Law13] for an example). However, if A is connected, then the discrete analogue of Beurling’s projection theorem [Kes87] provides a finite constant c such that

$$ \begin{align*} {\mathbb{H}}_A (x) \leq c n^{-1/2}. \end{align*} $$

This upper bound is realized (up to a constant factor) when A is a line segment and x is one of its endpoints.

Our next result provides lower bounds of harmonic measure to complement the preceding upper bounds, addressing the question: What is the least positive value of ${\mathbb {H}}_A (x)$ ?

Theorem 4. There exists a universal positive constant c such that, if A is a subset of $\mathbb {Z}^2$ with $n \geq 1$ elements, then either ${\mathbb {H}}_A (x) = 0$ or

(1.2) $$ \begin{align} {\mathbb{H}}_A (x) \geq e^{- c n \log n}. \end{align} $$

It is much easier to prove that, if A is connected, then equation (1.2) can be replaced by

$$ \begin{align*} {\mathbb{H}}_A (x) \geq e^{-c n}. \end{align*} $$

This lower bound is optimal in terms of its dependence on n, as we can choose A to be a narrow, rectangular ‘tunnel’ with a depth of order n and an element just above the ‘bottom’ of the tunnel, in which case the harmonic measure of this element is exponentially small in n; we will shortly discuss a related example in greater detail. We expect that the bound in equation (1.2) can be improved to an exponential decay with a rate of order n instead of $n \log n$ .

We believe that the best possible lower bound would be realized by the harmonic measure of the innermost element of a square spiral (Figure 3). The virtue of the square spiral is that, essentially, with each additional element, the shortest path to the innermost element lengthens by two steps. This heuristic suggests that the least positive value of harmonic measure should decay no faster than $4^{-2n}$ , as $n \to \infty $ . Indeed, Example 1.6 suggests an asymptotic decay rate of $(2+\sqrt {3})^{-2n}$ . We formalize this observation as a conjecture. To state it, let $\mathscr {H}_n$ be the collection of n-element subsets A of $\mathbb {Z}^2$ such that ${\mathbb {H}}_A (o)> 0$ .

Figure 3 A square spiral. The shortest path $\Gamma $ (red) from $\Gamma _1$ to the origin, which first hits $A_n$ (black and gray dots) at the origin, has a length of approximately $2n$ . Some elements (gray dots) of $A_n$ could be used to continue the spiral pattern (indicated by the black dots) but are presently placed to facilitate a calculation in Example 1.6.

Conjecture 1.5. Asymptotically, the square spiral of Figure 3 realizes the least positive value of harmonic measure, in the sense that

$$ \begin{align*} \lim_{n\to\infty} - \frac1n \log \inf_{A \in \mathscr{H}_n} {\mathbb{H}}_A (o) = 2 \log (2+\sqrt{3}). \end{align*} $$

Example 1.6. Figure 3 depicts the construction of an increasing sequence of sets $(A_1, A_2, \dots )$ such that, for all $n \geq 1$ , $A_n$ is an element of $\mathscr {H}_n$ and the shortest path $\Gamma = (\Gamma _1, \Gamma _2, \dots , \Gamma _k)$ from the exterior boundary of $A_n \cup \partial A_n$ to $\Gamma _k = o$ , which satisfies $\Gamma _i \notin A_n$ for $1 \leq i \leq k - 1$ , has a length of $k = 2(1- o_n (1))n$ . Since $\Gamma _1$ separates the origin from infinity in $A_n^c$ , we have

(1.3) $$ \begin{align} {\mathbb{H}}_{A_n} (o) = {\mathbb{H}}_{A_n \cup \{\Gamma_1\}} (\Gamma_1) \cdot \mathbb{P}_{\, \Gamma_1} \left( S_{\tau_{A_n}} = o \right). \end{align} $$

Concerning the first factor of equation (1.3), one can show that there exist positive constants $b, c < \infty $ such that, for all sufficiently large n,

$$ \begin{align*} c n^{-b} \leq {\mathbb{H}}_{A_n \cup \{\Gamma_1\}} (\Gamma_1) \leq 1. \end{align*} $$

To address the second factor of equation (1.3), we sum over the last time $t < \tau _{A_n}$ that $S_t$ visits $\Gamma _1$ :

$$\begin{align*}\mathbb{P}_{\Gamma_1} \left( S_{\tau_{A_n}} = o \right) = \sum_{t=0}^\infty \mathbb{P}_{\Gamma_1} \left(S_t = \Gamma_1, t< \tau_{A_n}; \{S_{t+1},\dots, S_{\tau_{A_n}}\} \subseteq \Gamma_{2:k} \right), \end{align*}$$

where $\Gamma _{2:k} = \{\Gamma _2,\dots ,\Gamma _k\}$ . The Markov property applied to t implies that

$$\begin{align*}\mathbb{P}_{\Gamma_1} \left(\{S_{t+1},\dots, S_{\tau_{A_n}}\} \subseteq \Gamma_{2:k} \bigm\vert S_t = \Gamma_1, t< \tau_{A_n} \right) = \mathbb{P}_{\Gamma_2} \big( \tau_o < \tau_{\mathbb{Z}^2 \setminus \Gamma_{2:k}} \big). \end{align*}$$

Therefore,

(1.4) $$ \begin{align} \mathbb{P}_{\Gamma_1} \left( S_{\tau_{A_n}} = o \right) = \mathbb{P}_{\Gamma_2} \big( \tau_o < \tau_{\mathbb{Z}^2 \setminus \Gamma_{2:k}} \big) \sum_{t=0}^\infty \mathbb{P}_{\Gamma_1} \left(S_t = \Gamma_1, t< \tau_{A_n} \right). \end{align} $$

Denote the first hitting time of a set $B \subseteq \mathbb {Z}^2$ by $\sigma _B = \inf \{j \geq 0: S_j \in B\}$ , or $\sigma _x$ if $B = \{x\}$ for some $x \in \mathbb {Z}^2$ . The first factor of equation (1.4) equals $\mathbb {P}_{\Gamma _2} ( \sigma _o < \sigma _{\mathbb {Z}^2 \setminus \Gamma _{2:k}} )$ because $\Gamma _2 \notin \{o\} \cup (\mathbb {Z}^2 \setminus \Gamma _{2:k})$ . We calculate it as $f(2)$ , where $f(i) = \mathbb {P}_{\Gamma _i} ( \sigma _o < \sigma _{\mathbb {Z}^2 \setminus \Gamma _{2:k}} )$ solves the system of difference equations

$$ \begin{align*} f(1) = 0,\quad f(k) = 1, \quad \text{and} \quad f(i) = \frac14 f(i+1) + \frac14 f(i-1), \quad 2 \leq i \leq k - 1. \end{align*} $$

The solution of this system yields

(1.5) $$ \begin{align} \frac{1}{(2+\sqrt{3})^{k-1}} \leq f(2) = \frac{2\sqrt{3}}{(2+\sqrt{3})^{k-1} - (2-\sqrt{3})^{k-1}} \leq \frac{1}{(2+\sqrt{3})^{k-2}}. \end{align} $$
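For completeness (a short aside, using the standard characteristic-root method): rewriting the recurrence as $f(i+1) = 4 f(i) - f(i-1)$ , the characteristic equation $\lambda ^2 - 4 \lambda + 1 = 0$ has roots $\lambda _{\pm } = 2 \pm \sqrt {3}$ , and the boundary conditions force

$$ \begin{align*} f(i) = \frac{\lambda_+^{\,i-1} - \lambda_-^{\,i-1}}{\lambda_+^{\,k-1} - \lambda_-^{\,k-1}}, \qquad \text{so that} \qquad f(2) = \frac{\lambda_+ - \lambda_-}{\lambda_+^{\,k-1} - \lambda_-^{\,k-1}} = \frac{2\sqrt{3}}{(2+\sqrt{3})^{k-1} - (2-\sqrt{3})^{k-1}}, \end{align*} $$

which is the middle expression in equation (1.5). The outer bounds then follow from elementary estimates, using $\lambda _+ \lambda _- = 1$ and the fact that k is large.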

Lastly, note that the second factor of equation (1.4) is the expected number of visits that $S_t$ makes to $\Gamma _1$ before time $\tau _{A_n}$ , which equals the reciprocal of $\mathbb {P}_{\Gamma _1} (\tau _{A_n} < \tau _{\Gamma _1})$ . Since $\Gamma _1$ is adjacent to $A_n$ , this probability is at least $\frac 14$ , hence

$$\begin{align*}1 \leq \sum_{t=0}^\infty \mathbb{P}_{\Gamma_1} \left(S_t = \Gamma_1, t< \tau_{A_n} \right) \leq 4. \end{align*}$$
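To spell out the geometric-series step behind the preceding bounds (an aside): the number N of visits to $\Gamma _1$ strictly before $\tau _{A_n}$ , counting the visit at time zero, satisfies $\mathbb {P}_{\Gamma _1} (N > j) = q^j$ with $q = \mathbb {P}_{\Gamma _1} (\tau _{\Gamma _1} < \tau _{A_n})$ , by the strong Markov property applied at successive returns to $\Gamma _1$ . Consequently,

$$ \begin{align*} \sum_{t=0}^\infty \mathbb{P}_{\Gamma_1} \left(S_t = \Gamma_1, t< \tau_{A_n} \right) = \mathbb{E}_{\Gamma_1} N = \sum_{j=0}^\infty q^j = \frac{1}{1-q} = \frac{1}{\mathbb{P}_{\Gamma_1} \left( \tau_{A_n} < \tau_{\Gamma_1} \right)}. \end{align*} $$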

Combining the preceding bounds, we conclude that, for all sufficiently large n,

$$ \begin{align*} \cfrac{c n^{-b}}{(2+\sqrt{3})^{k-1}} \leq {\mathbb{H}}_{A_n} (o) \leq \frac{4}{(2+\sqrt{3})^{k-2}}. \end{align*} $$

Substituting $k = 2(1-o_n (1)) n$ and simplifying, we obtain

$$ \begin{align*} (2+\sqrt{3})^{-2(1+o_n(1))n} \leq {\mathbb{H}}_{A_n} (o) \leq (2+\sqrt{3})^{-2(1-o_n(1))n}, \end{align*} $$

which implies

$$ \begin{align*} \lim_{n\to\infty} -\frac1n \log {\mathbb{H}}_{A_n} (o) = 2 \log (2+\sqrt{3}). \end{align*} $$

We conclude the discussion of our main results by stating an estimate of hitting probabilities of the form $\mathbb {P}_x \left ( \tau _{\partial A_d} < \tau _A \right )$ , for $x \in A$ and where $A_d$ is the set of all elements of $\mathbb {Z}^2$ within distance d of A; we will call these escape probabilities from A. Among n-element subsets A of $\mathbb {Z}^2$ , when d is sufficiently large relative to the diameter of A, the greatest escape probability to a distance d from A is at most the reciprocal of $\log d$ , up to a constant factor. We find that, in general, it is at least this much, up to an n-dependent factor.

Theorem 5. There exists a universal positive constant c such that, if A is a finite subset of $\mathbb {Z}^2$ with $n \geq 2$ elements and if $d \geq 2\, \mathrm {diam} (A)$ , then, for any $x \in A$ ,

(1.6) $$ \begin{align} \mathbb{P}_x (\tau_{\partial A_d} < \tau_A) \geq \frac{c {\mathbb{H}}_A (x)}{n \log d}. \end{align} $$

In particular,

(1.7) $$ \begin{align} \max_{x \in A} \mathbb{P}_x \left( \tau_{\partial A_d} < \tau_A \right) \geq \frac{c}{n^2 \log d}. \end{align} $$

In the context of the HAT dynamics, we will use equation (1.7) to control the transport step, ultimately producing the $\log d$ timescale appearing in Theorem 1. In the setting of its application, A and d will, respectively, represent a subset of a HAT configuration and the separation of A from the rest of the configuration. Reflecting the potential sparsity of HAT configurations, d may be arbitrarily large relative to n.

Organization

HAT motivates the development of new estimates of harmonic measure and escape probabilities. We attend to these estimates in Section 3, after we provide a conceptual overview of the proofs of Theorems 1 and 2 in Section 2. To analyze configurations of large diameter, we will decompose them into well separated ‘clusters’, using a construction introduced in Section 5 and used throughout Section 6. The estimates of Section 3 control the activation and transport steps of the dynamics and serve as the critical inputs to Section 6, in which we analyze the ‘collapse’ of HAT configurations. We then identify the class of configurations to which the HAT dynamics can return and prove the existence of a stationary distribution supported on this class; this is the primary focus of Section 7. The final section, Section 8, uses an exponential tail bound on the diameter of configurations under the stationary distribution – a result we obtain at the end of Section 7 – to show that the center of mass process, properly rescaled, converges in distribution to two-dimensional Brownian motion.

Forthcoming notation

We will denote expectation with respect to $\mathbf {P}_U$ , the law of HAT from the configuration U, by $\mathbf {E}_U$ ; the indicator of an event E by $\mathbf {1}_E$ or $\mathbf {1} (E)$ ; the Euclidean disk of radius r about x by $D_x (r) = \{y \in \mathbb {Z}^2: | x - y | < r \}$ , or $D(r)$ if $x = o$ ; its boundary by $C_x (r) = \partial D_x (r)$ , or $C(r)$ if $x = o$ ; the radius of a finite set $A \subset \mathbb {Z}^2$ by $\mathrm {rad} (A) = \sup \{ |x|: x \in A\}$ ; the R-fattening of A by $A_R = \{x \in \mathbb {Z}^2: {\mathrm {dist}}(A,x) \leq R\}$ and the minimum of two random times $\tau _1$ and $\tau _2$ by $\tau _1 \wedge \tau _2$ .

When we refer to a (universal) constant, we will always mean a positive real number. When we cite standard results from [Law13] and [Pop21], we will write $O(g)$ to denote a function f that uniformly satisfies $|f| \leq c g$ for an implicit constant c. However, in all other instances, f will be nonnegative and we will simply mean the estimate $f \leq c g$ by $O(g)$ . We will include a subscript to indicate the dependence of the implicit constant on a parameter, for example, $f = O_{\! n} (g)$ . We will use $\Omega (g)$ and $\Omega _n (g)$ for the reverse estimate. We will use $o_n (1)$ to denote a nonnegative quantity that is at most $1$ and which tends to $0$ as $n \to \infty $ , for example, $n^{-1}$ for $n \geq 1$ .

2 Conceptual overview

2.1 Estimating the collapse time and proving the existence of the stationary distribution

Before providing precise details, we discuss some of the key steps in the proofs of Theorems 1 and 2. Since the initial configuration U of n particles is arbitrary, it will be advantageous to decompose any such configuration into clusters such that the separation between any two clusters is at least exponentially large relative to their diameters. As we will show later, we can always find such a clustering when the diameter of U is large enough in terms of n. For the purpose of illustration, let us start by assuming that U consists of just two clusters with separation d and hence the individual diameters of the clusters are no greater than $\log d$ (Figure 4).

The first step in our analysis is to show that in time comparable to $\log d,$ the diameter of U will shrink to $\log d$ . This is the phenomenon we call collapse. Theorem 4 implies that every particle with positive harmonic measure has harmonic measure of at least $e^{-c n \log n}$ . In particular, the particle in each cluster with the greatest escape probability from that cluster has at least this harmonic measure. Our choice of clustering will ensure that each cluster has positive harmonic measure. Accordingly, we will treat each cluster as the entire configuration and Theorem 5 will imply that the greatest escape probability from each cluster will be at least $(\log d)^{-1}$ , up to a factor depending upon n.

Together, these results will imply that, in $O_{\! n} (\log d)$ steps, with a probability depending only upon n, all the particles from one of the clusters in Figure 4 will move to the other cluster. Moreover, since the diameter of a cluster grows at most linearly in time, the final configuration will have diameter which is no greater than the diameter of the surviving cluster plus $O_{\! n} (\log d)$ . Essentially, we will iterate this estimate – by clustering anew the surviving cluster of Figure 4 – each time obtaining a cluster with a diameter which is the logarithm of the original diameter, until d becomes smaller than a deterministic function $\theta _{4n}$ , which is approximately the $4n$ th iterated exponential of $cn$ , for a constant c.

Let us denote the corresponding stopping time by $\mathcal {T} (\text {below}\ \theta _{4n}).$ In the setting of the application, there may be multiple clusters and we collapse them one by one, reasoning as above. If any such collapse step fails, we abandon the experiment and repeat it. Of course, with each failure, the set we attempt to collapse may have a diameter which is additively larger by $O_{\! n} (\log d)$ . Ultimately, our estimates allow us to conclude that the attempt to collapse is successful within the first $(\log d)^{1+o_n (1)}$ tries with a high probability.

The preceding discussion roughly implies the following result, uniformly in the initial configuration U:

$$ \begin{align*} \mathbf{P}_U ( \mathcal{T} (\text{below}\ \theta_{4n}) \le (\log d)^{1+o_n (1)} )\ge 1 - e^{- n}. \end{align*} $$

At this stage, we prove that, given any configuration $\widehat U$ and any configuration $\widehat V \in \widehat {\mathrm {N}} \mathrm {onIso} (n)$ , if K is sufficiently large in terms of n and the diameters of $\widehat U$ and $\widehat V$ , then

$$ \begin{align*}\mathbf{P}_{\widehat U} ( \mathcal{T} (\text{hits}\ \widehat V) \leq K^5 ) \geq 1 - e^{-K},\end{align*} $$

where $\mathcal {T} (\text {hits}\ \widehat V)$ is the first time the configuration is $\widehat V$ . To prove this estimate, we split it into two, simpler estimates. Specifically, we show that the particles of $\widehat U$ form a line segment of length n in $K^4$ steps with high probability, and we prove by induction on n that any other nonisolated configuration $\widehat V$ is reachable from the line segment in $K^5$ steps, with high probability. In addition to implying irreducibility of the HAT dynamics on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ , we use this result to obtain a finite upper bound on the expected return time to any nonisolated configuration (i.e., it proves the positive recurrence of HAT on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ ). Irreducibility and positive recurrence on $\widehat {\mathrm {N}} \mathrm {onIso} (n)$ imply the existence and uniqueness of the stationary distribution.

Figure 4 Exponentially separated clusters.

Figure 5 Sparse sets like ones which appear in the proofs of Theorems 4 (left) and 5 (right). The elements of A are represented by dark green dots. On the left, $A {\setminus } \{o\}$ is a subset of $D(R)^c$ . On the right, A is a subset of $D(r)$ and $A_R$ , the R-fattening of A (shaded green), is a subset of $D(R+r)$ . The figure is not to scale, as $R \geq e^n$ on the left, while $R \geq e^r$ on the right.

2.2 Improved estimates of hitting probabilities for sparse sets

HAT configurations may include subsets with large diameters relative to the number of elements they contain, and in this sense they are sparse. Two such cases are depicted in Figure 5. A key component of the proofs of Theorems 4 and 5 is a method which improves two standard estimates of hitting probabilities when applied to sparse sets, as summarized by Table 1.

Table 1 Summary of improvements to standard estimates in sparse settings. The origin is denoted by o and $A_R$ denotes the set of all points in $\mathbb {Z}^2$ within a distance R of A.

For the scenario depicted in Figure 5 (left), we estimate the probability that a random walk from $x \in C(\tfrac {R}{3})$ hits the origin before any element of $A{\setminus } \{o\}$ . Since $C(R)$ separates x from $A{\setminus }\{o\}$ , this probability is at least $\mathbb {P}_x (\tau _o < \tau _{C(R)})$ . We can calculate this lower bound by combining the fact that the potential kernel (defined in Section 3) is harmonic away from the origin with the optional stopping theorem (e.g., Proposition 1.6.7 of [Law13]):

$$ \begin{align*} \mathbb{P}_x \left( \tau_{o} < \tau_{C(R)} \right) = \frac{\log R - \log |x| + O (R^{-1})}{\log R + O (R^{-1})}. \end{align*} $$

This implies $\mathbb {P}_x (\tau _{o} < \tau _{A \cap D(R)^c}) = \Omega (\tfrac {1}{\log R})$ since $x \in C(\tfrac {R}{3})$ .
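For the reader’s convenience, here is a sketch of the optional stopping computation just cited, in the notation of Section 3, where the potential kernel $\mathfrak {a}$ and the function $\mathfrak {a}'$ are defined: $\mathfrak {a}$ vanishes at the origin, is harmonic on $\mathbb {Z}^2 {\setminus } \{o\}$ and, by equation (3.9), equals $\mathfrak {a}' (R) + O(R^{-1})$ on $C(R)$ , so the optional stopping theorem applied at $\tau _o \wedge \tau _{C(R)}$ (the walk starts at $x \neq o$ ) gives

$$ \begin{align*} \mathfrak{a} (x) = \mathbb{E}_x \big[ \mathfrak{a} \big( S_{\tau_o \wedge \tau_{C(R)}} \big) \big] = \mathbb{P}_x \left( \tau_{C(R)} < \tau_o \right) \big( \mathfrak{a}'(R) + O (R^{-1}) \big). \end{align*} $$

Solving for $\mathbb {P}_x ( \tau _o < \tau _{C(R)} ) = 1 - \mathbb {P}_x ( \tau _{C(R)} < \tau _o )$ and using $\mathfrak {a} (x) = \frac {2}{\pi } \log |x| + \kappa + O(|x|^{-2})$ recovers the display above, with the bounded constant $\kappa $ harmless for the stated conclusion.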

We can improve the lower bound to $\Omega (\tfrac {1}{n})$ by using the sparsity of A. We define the random variable $W = \sum _{y \in A{\setminus } \{o\}} \mathbf {1} \left ( \tau _y < \tau _{o} \right )$ and write

$$\begin{align*}\mathbb{P}_x \left (\tau_{o} < \tau_{A {\setminus} \{o\}} \right) = \mathbb{P}_x \left( W = 0 \right) = 1 - \frac{\mathbb{E}_x W}{\mathbb{E}_x [ W \bigm\vert W> 0 ]}.\end{align*}$$
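The second equality in the preceding display follows from conditioning on $\{W > 0\}$ (a one-line aside): since W is nonnegative and integer valued,

$$ \begin{align*} \mathbb{E}_x W = \mathbb{E}_x [ W \bigm\vert W > 0 ] \, \mathbb{P}_x (W > 0), \qquad \text{hence} \qquad \mathbb{P}_x (W = 0) = 1 - \mathbb{P}_x (W > 0) = 1 - \frac{\mathbb{E}_x W}{\mathbb{E}_x [ W \bigm\vert W > 0 ]}. \end{align*} $$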

We will show that $\mathbb {E}_x [ W \bigm \vert W> 0 ] \geq \mathbb {E}_x W + \delta $ for some $\delta $ which is uniformly positive in A and n. We will be able to find such a $\delta $ because random walk from x hits a given element of $A{\setminus }\{o\}$ before o with a probability of at most $1/2$ , so conditioning on $\{W>0\}$ effectively increases W by $1/2$ . Then

$$\begin{align*}\mathbb{P}_x \left( \tau_o < \tau_{A {\setminus} \{o\}} \right) \geq 1 - \frac{\mathbb{E}_x W}{\mathbb{E}_x W + \delta} \geq 1 - \frac{n}{n+\delta} = \Omega (\tfrac{1}{n}).\end{align*}$$

The second inequality follows from the monotonicity of $\tfrac {\mathbb {E}_x W}{\mathbb {E}_x W + \delta }$ in $\mathbb {E}_x W$ and the fact that $|A| \leq n$ , so $\mathbb {E}_x W \leq n$ . This is a better lower bound than $\Omega (\tfrac {1}{\log R})$ when R is at least $e^n$ .

A variation of this method also improves a standard estimate for the scenario depicted in Figure 5 (right). In this case, we estimate the probability that a random walk from $x \in C(2r)$ hits $\partial A_R$ before A, where A is contained in $D(r)$ and $A_R$ consists of all elements of $\mathbb {Z}^2$ within a distance $R \geq e^r$ of A. We can bound below this probability by using the fact that

$$\begin{align*}\mathbb{P}_x \left( \tau_{\partial A_R} < \tau_A \right) \geq \mathbb{P}_x ( \tau_{C(R+r)} < \tau_{C(r)} ). \end{align*}$$

A standard calculation with the potential kernel of random walk (e.g., Exercise 1.6.8 of [Law13]) shows that this lower bound is $\Omega _n (\tfrac {1}{\log R})$ since $R \geq e^r$ and $r = \Omega (n^{1/2})$ .
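The calculation alluded to here is the annulus analogue of the optional stopping computation sketched above (again with the error terms from equation (3.9) suppressed): stopping $\mathfrak {a} (S_t)$ at $\tau _{C(r)} \wedge \tau _{C(R+r)}$ and solving for the escape probability gives, for $x \in C(2r)$ ,

$$ \begin{align*} \mathbb{P}_x \big( \tau_{C(R+r)} < \tau_{C(r)} \big) \approx \frac{\mathfrak{a}(x) - \mathfrak{a}'(r)}{\mathfrak{a}'(R+r) - \mathfrak{a}'(r)} = \frac{\log |x| - \log r}{\log (R+r) - \log r} \geq \frac{\log 2}{\log (2R)}, \end{align*} $$

which is of order $\tfrac {1}{\log R}$ , as claimed.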

We can improve the lower bound to $\Omega _n (\tfrac {\log r}{\log R})$ by using the sparsity of A. We define $W' = \sum _{y \in A} \mathbf {1} \left ( \tau _y < \tau _{\partial A_R} \right )$ and write

$$\begin{align*}\mathbb{P}_x \left( \tau_{\partial A_R} < \tau_A \right) = 1 - \frac{\mathbb{E}_x W'}{\mathbb{E}_x [W' \bigm\vert W'> 0 ]} \geq 1 - \frac{n\alpha}{1 + (n-1)\beta},\end{align*}$$

where $\alpha $ bounds above $\mathbb {P}_x \left ( \tau _y < \tau _{\partial A_R}\right )$ and $\beta $ bounds below $\mathbb {P}_z \left ( \tau _y < \tau _{\partial A_R}\right )$ , uniformly for $x \in C(2r)$ and distinct $y,z \in A$ . We will show that $\alpha \leq \beta $ and $\beta \leq 1 - \tfrac {\log (2r)}{\log R}$ . The former is plausible because $|x-y|$ is at least as great as $|y-z|$ ; the latter because ${\mathrm {dist}} (z,\partial A_R) \geq R$ while $|y-z| \leq 2r$ , and because of equation (3.9). We apply these facts to the preceding display to conclude

$$\begin{align*}\mathbb{P}_x \left( \tau_{\partial A_R} < \tau_A \right) \geq n^{-1}(1-\beta) = \Omega_n (\tfrac{\log r}{\log R}).\end{align*}$$

This is a better lower bound than $\Omega _n (\tfrac {1}{\log R})$ because r can be as large as $\log R$ .
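As an aside (one way to see the inequality $1 - \mathbb {E}_x W' / \mathbb {E}_x [W' \bigm \vert W' > 0 ] \geq 1 - \frac {n\alpha }{1 + (n-1)\beta }$ used above; this is our sketch, not a quotation of the paper’s proof): the numerator satisfies $\mathbb {E}_x W' = \sum _{y \in A} \mathbb {P}_x ( \tau _y < \tau _{\partial A_R} ) \leq n \alpha $ , while on $\{W' > 0\}$ some element $y^*$ of A is hit before $\partial A_R$ , and by the strong Markov property each of the remaining $n-1$ elements of A is subsequently hit before $\partial A_R$ with probability at least $\beta $ . Hence

$$ \begin{align*} \mathbb{E}_x [ W' \bigm\vert W' > 0 ] \geq 1 + (n-1) \beta, \qquad \text{so} \qquad \frac{\mathbb{E}_x W'}{\mathbb{E}_x [ W' \bigm\vert W' > 0 ]} \leq \frac{n \alpha}{1 + (n-1) \beta}. \end{align*} $$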

In summary, by analyzing certain conditional expectations, we can better estimate hitting probabilities for sparse sets than we can by applying standard results. This approach may be useful in obtaining other sparse analogues of hitting probability estimates.

3 Harmonic measure estimates

The purpose of this section is to prove Theorem 4. We will describe the proof strategy in Section 3.1, before stating several estimates in Section 3.2 that streamline the presentation of the proof in Section 3.3.

Consider a subset A of $\mathbb {Z}^2$ with $n \geq 2$ elements, which satisfies ${\mathbb {H}}_A (o)> 0$ (i.e., $A \in \mathscr {H}_n$ ). We frame the proof of equation (1.2) in terms of advancing a random walk from infinity to the origin in three or four stages, while avoiding all other elements of A. We continue to use the phrase ‘random walk from infinity’ to refer to the limiting hitting probability of a fixed, finite subset of $\mathbb {Z}^2$ by a random walk from $x \in \mathbb {Z}^2$ as $|x| \to \infty $ . We will write $\mathbb {P}_\infty $ as a shorthand for this limit.

The stages of advancement are defined in terms of a sequence of annuli which partition $\mathbb {Z}^2$ . Denote by $\mathcal {A} (r,R) = D(R) {\setminus } D(r)$ the annulus with inner radius r and outer radius R. We will frequently need to reference the subset of A which lies within or beyond a disk. We denote $A_{< r} = A \cap D(r)$ and $A_{\geq r} = A \cap D(r)^c$ . Define radii $R_1, R_2, \dots $ and annuli $\mathcal {A}_1, \mathcal {A}_2, \dots $ through $R_1 = 10^5$ , and $R_\ell = R_1^\ell $ and $\mathcal {A}_\ell = \mathcal {A} (R_\ell , R_{\ell +1})$ for $\ell \geq 1$ . We fix $\delta = 10^{-2}$ for use in intermediate scales, like $C(\delta R_{\ell +1}) \subset \mathcal {A}_\ell $ . Additionally, we denote by $n_{0}$ , $n_\ell $ , $m_\ell $ , and $n_{>J}$ the number of elements of A in $D(R_1)$ , $\mathcal {A}_\ell $ , $\mathcal {A}_\ell \cup \mathcal {A}_{\ell +1}$ , and $D(R_{J+1})^c$ , respectively.

We will split the proof of equation (1.2) into an easy case when $n_0 = n$ and a difficult case when $n_0 \neq n$ . If $n_0 \neq n$ , then $A_{\geq R_1}$ is nonempty and the following indices $I = I(A)$ and $J =J(A)$ are well defined:

$$ \begin{align*} I &= \min \{\ell \geq 1:\ \mathcal{A}_\ell\text{ contains an element of}\ A {\setminus} \{o\} \}, \,\,\text{and}\\ J &= \min \{\ell> I: \ \mathcal{A}_\ell\ \text{contains no element of}\ A {\setminus} \{o\}\}. \end{align*} $$

Figure 6 illustrates the definitions of I and J. We explain their roles in the following subsection.

Figure 6 The first annulus that intersects A (green dots) is $\mathcal {A}_I$ ; the next empty annulus is $\mathcal {A}_J$ .

3.1 Strategy for the proof of Theorem 4

This section outlines a proof of equation (1.2) by induction on n. The induction step is easy when $n_0 = n$ because it implies that A is contained in $D(10^5)$ , hence ${\mathbb {H}}_A (o)$ is at least a universal positive constant. The following strategy concerns the difficult case when $n_0 \neq n$ .

Stage 1: Advancing to $C(R_{J})$ . Assume $n_0 \neq n$ and $n \geq 3$ . By the induction hypothesis, there is a universal constant $c_1$ such that the harmonic measure at the origin is at least $e^{-c_1 k \log k}$ , for any set in $\mathscr {H}_k$ , $1 \leq k < n$ . Let $k = n_{>J}+1$ . Because a random walk from $\infty $ which hits the origin before $A_{\geq R_{J}}$ also hits $C(R_{J})$ before A, the induction hypothesis applied to $A_{\geq R_{J}}\cup \{o\} \in \mathscr {H}_{k}$ implies that $\mathbb {P}_\infty (\tau _{C(R_{J})} < \tau _A)$ is no smaller than exponential in $k \log k$ . Note that $k < n$ because $A_{<R_{I+1}}$ has at least two elements by the definition of I.

The reason we advance the random walk to $C(R_{J})$ instead of $C(R_{J-1})$ is that an adversarial choice of A could produce a ‘choke point’ which likely dooms the walk to be intercepted by $A{\setminus } \{o\}$ in the second stage of advancement (Figure 7). To avoid a choke point when advancing to the boundary of a disk D, it suffices for the conditional hitting distribution of $\partial D$ given $\{\tau _{\partial D} < \tau _A\}$ to be comparable to the uniform hitting distribution on $\partial D$ . To prove this comparison, the annular region immediately beyond D and extending to a radius of, say, twice that of D, must be empty of A. This explains the need for exponentially growing radii and for $\mathcal {A}_J$ to be empty of A.

Figure 7 An example of a choke point (left) and a strategy for avoiding it (right). The hitting distribution of a random walk conditioned to reach $\partial D$ before A (green dots) may favor the avoidance of $A \cap D^c$ in a way which localizes the walk (e.g., as indicated by the dark red arc of $\partial D$ ) prohibitively close to $A \cap D$ . The hitting distribution on $C(R_{J})$ will be approximately uniform if the radii grow exponentially. The random walk can then avoid the choke point by ‘tunneling’ through it (e.g., by passing through the tan-shaded region).

Stage 2: Advancing into $\mathcal {A}_{I-1}$ . For notational convenience, assume $I \geq 2$ so that $\mathcal {A}_{I-1}$ is defined; the argument is the same when $I=1$ . Each annulus $\mathcal {A}_\ell $ , $\ell \in \{I,\dots ,J - 1\}$ , contains one or more elements of A, which the random walk must avoid on its journey to $\mathcal {A}_{I-1}$ . Except in an easier subcase, which we address at the end of this subsection, we advance the walk into $\mathcal {A}_{I-1}$ by building an overlapping sequence of rectangular and annular tunnels, through and between each annulus, which are empty of A and through which the walk can enter $\mathcal {A}_{I-1}$ (Figure 8). Specifically, the walk reaches a particular subset $\mathrm {Arc}_{I-1}$ in $\mathcal {A}_{I-1}$ at the conclusion of the tunneling process. We will define $\mathrm {Arc}_{I-1}$ in Lemma 3.3 as an arc of a circle in $\mathcal {A}_{I-1}$ .

Figure 8 Tunneling through nonempty annuli. We construct a contiguous series of sectors (tan) and annuli (blue) which contain no elements of A (green dots) and through which the random walk may advance from $C(R_{J - 1})$ to $C(\delta R_{I - 1})$ (dashed).

By the pigeonhole principle applied to the angular coordinate, for each $\ell \geq I+1$ , there is a sector of aspect ratio $m_\ell = n_\ell + n_{\ell -1}$ , from the lower ‘ $\delta $ th’ of $\mathcal {A}_{\ell }$ to that of $\mathcal {A}_{\ell -1}$ , which contains no element of A (Figure 8). To reach the entrance of the analogous tunnel between $\mathcal {A}_{\ell -1}$ and $\mathcal {A}_{\ell -2}$ , the random walk may need to circle the lower $\delta $ th of $\mathcal {A}_{\ell -1}$ . We apply the pigeonhole principle to the radial coordinate to conclude that there is an annular region contained in the lower $\delta $ th of $\mathcal {A}_{\ell -1}$ , with an aspect ratio of $n_{\ell -1}$ , which contains no element of A.

The probability that the random walk reaches the annular tunnel before exiting the rectangular tunnel from $\mathcal {A}_{\ell }$ to $\mathcal {A}_{\ell -1}$ is no smaller than exponential in $m_\ell $ . Similarly, the random walk reaches the rectangular tunnel from $\mathcal {A}_{\ell -1}$ to $\mathcal {A}_{\ell -2}$ before exiting the annular tunnel in $\mathcal {A}_{\ell -1}$ with a probability no smaller than exponential in $n_{\ell -1}$ . Overall, we conclude that the random walk reaches $\mathrm {Arc}_{I-1}$ without leaving the union of tunnels – and therefore without hitting an element of A – with a probability no smaller than exponential in $\sum _{\ell = I}^{J - 1} n_\ell $ .

Stage 3: Advancing to $C(R_1)$ . Figure 5 (left) essentially depicts the setting of the random walk upon reaching $x \in \mathrm {Arc}_{I-1}$ , except with $C(R_I)$ in the place of $C(R)$ and the circle containing $\mathrm {Arc}_{I-1}$ in the place of $C(\frac {R}{3})$ and except for the possibility that $D(R_1)$ contains other elements of A. Nevertheless, if the radius of $\mathrm {Arc}_{I-1}$ is at least $e^n$ , then by pretending that $A_{<R_1} = \{o\}$ , the method highlighted in Section 2.2 will show that $\mathbb {P}_x (\tau _{C(R_1)} < \tau _{A}) = \Omega (\frac 1n)$ . A simple calculation will give the same lower bound (for a potentially smaller constant) in the case when the radius is less than $e^n$ .

Stage 4: Advancing to the origin. Once the random walk reaches $C(R_1)$ , we simply dictate a path for it to follow. There can be no more than $O(R_1^2)$ elements of $A_{<R_1}$ , so there is a path of length $O(R_1^2)$ to the origin which avoids all other elements of A, and a corresponding probability of at least a constant that the random walk follows it.

Conclusion of Stages 1–4. The lower bounds from the four stages imply that there are universal constants $c_1$ through $c_4$ such that

$$ \begin{align*} {\mathbb{H}}_A (o) \geq e^{-c_1 k \log k - c_2 \sum_{\ell=I}^{J-1} n_\ell - \log (c_3 n) - \log c_4} \geq e^{-c_1 n \log n}.\end{align*} $$

It is easy to show that the second inequality holds if $c_1 \geq 8\max \{1, c_2, \log c_3,\log c_4\}$ , using the fact that $n - k = \sum _{\ell =I}^{J-1} n_\ell> 1$ and $\log n \geq 1$ . We are free to adjust $c_1$ to satisfy this bound because $c_2$ through $c_4$ do not depend on the induction hypothesis. This concludes the induction step.

A complication in Stage 2. If $R_{\ell }$ is not sufficiently large relative to $m_\ell $ , then we cannot tunnel the random walk through $\mathcal {A}_{\ell }$ into $\mathcal {A}_{\ell -1}$ . We formalize this through the failure of the condition

(3.1) $$ \begin{align} \delta R_{\ell}> R_1 (m_\ell+1). \end{align} $$

The problem is that, if equation (3.1) fails, then there are too many elements of A in $\mathcal {A}_{\ell }$ and $\mathcal {A}_{\ell -1}$ , and we cannot guarantee that there is a tunnel between the annuli which avoids A.

Accordingly, we will stop Stage 2 tunneling once the random walk reaches a particular subset $\mathrm {Arc}_{K-1}$ of a circle in $\mathcal {A}_{K-1}$ , where $\mathcal {A}_{K-1}$ is the outermost annulus which fails to satisfy equation (3.1). Specifically, we define K as:

(3.2) $$ \begin{align} K = \begin{cases} I, & \text{if equation (3.1) holds for}\ \ell \in \{I,\dots,J\};\\ \min \{k \in \{I,\dots,J\}: \text{equation (3.1) holds for}\ \ell \in \{k,\dots,J\} \}, & \text{otherwise.} \end{cases} \end{align} $$

Informally, when $K = I$ , the pigeonhole principle yields wide enough tunnels for the random walk to advance all the way to $\mathcal {A}_{I-1}$ in Stage 2. When $K \neq I$ , the tunnels become too narrow, so we must halt tunneling before the random walk reaches $\mathcal {A}_{I-1}$ . However, this case is even simpler, as the failure of equation (3.1) for $\ell = K-1$ implies that there is a path of length $O(\sum _{\ell =I}^{K-1} n_\ell )$ from $\mathrm {Arc}_{K-1}$ to the origin which otherwise avoids A. In this case, instead of proceeding to Stages 3 and 4 as described above, the third and final stage consists of random walk from $\mathrm {Arc}_{K-1}$ following this path to the origin with a probability no smaller than exponential in $\sum _{\ell =I}^{K-1} n_\ell $ .

Overall, if $K \neq I$ , then Stages 2 and 3 contribute a rate of $\sum _{\ell =I}^{J-1} n_\ell $ . This rate is smaller than the one contributed by Stages 2–4 when $K = I$ , so the preceding conclusion holds.

3.2 Preparation for the proof of Theorem 4

3.2.1 Input to Stage 1

Let $A \in \mathscr {H}_n$ . Like in Section 3.1, we assume that $n_0 \neq n$ (i.e., $A_{\geq R_1} \neq \emptyset $ ) and defer the simpler complementary case to Section 3.3. Recall that the radii $R_i$ must grow exponentially so that the conditional hitting distribution of $C(R_J)$ is comparable to the uniform distribution, thus avoiding potential choke points (Figure 7). The next two results accomplish this comparison. We state them in terms of the uniform distribution on $C(r)$ , which we denote by $\mu _{r}$ .

Lemma 3.1. Let $\varepsilon> 0$ , and denote $\eta = \tau _{C(R)} \wedge \tau _{C (\varepsilon ^2 R)}$ . There is a constant c such that, if $\varepsilon \leq \tfrac {1}{100}$ and $R \geq 10\varepsilon ^{-2}$ and if

(3.3) $$ \begin{align} \min_{x \in C(\varepsilon R)} \mathbb{P}_x \left( \tau_{C(\varepsilon^2 R)} < \tau_{C(R)} \right)> \frac{1}{10},\end{align} $$

then, uniformly for $x \in C( \varepsilon R)$ and $y \in C (\varepsilon ^2 R)$ ,

$$\begin{align*}\mathbb{P}_x \left( S_\eta = y, \tau_{C(\varepsilon^2 R)} < \tau_{C(R)}\right) \geq c \mu_{\varepsilon^2 R} (y) \, \mathbb{P}_x \left(\tau_{C(\varepsilon^2 R)} < \tau_{C (R)} \right).\end{align*}$$

The proof, which is similar to that of Lemma 2.1 in [DPRZ06], approximates the hitting distribution of $C(\varepsilon ^2 R)$ by the corresponding harmonic measure, which is comparable to the uniform distribution. The condition (3.3), and the assumptions on $\varepsilon $ and R, are used to control the error of this approximation. We defer the proof to Section A.3, along with the proof of the following application of Lemma 3.1.

Lemma 3.2. There is a constant c such that, for every $z \in C(R_{J})$ ,

(3.4) $$ \begin{align} \mathbb{P}_\infty \big( S_{\tau_{C(R_{J})}} = z \bigm\vert \tau_{C(R_{J})} < \tau_A \big) \geq c \mu_{R_J} (z). \end{align} $$

Under the conditioning in equation (3.4), the random walk reaches $C(\delta R_{J+1})$ before hitting A. A short calculation shows that it typically proceeds to hit $C(R_{J})$ before returning to $C(R_{J+1})$ (i.e., it satisfies equation (3.3) with $R_{J+1}$ in the place of R and $\varepsilon ^2 = 10^{-5}$ ). The inequality (3.4) then follows from Lemma 3.1.

3.2.2 Inputs to Stage 2

We continue to assume that $n_0 \neq n$ so that I, J and K are well defined; the $n_0 = n$ case is easy and we address it in Section 3.3. In this subsection, we will prove an estimate of the probability that a random walk passes through annuli $\mathcal {A}_{J-1}$ to $\mathcal {A}_K$ without hitting A. First, in Lemma 3.3, we will identify a sequence of ‘tunnels’ through the annuli, which are empty of A. Second, in Lemma 3.4 and Lemma 3.5, we will show that random walk traverses these tunnels through a series of rectangles, with a probability that is no smaller than exponential in the number of elements in $\mathcal {A}_K, \dots , \mathcal {A}_{J-1}$ . We will combine these estimates in Lemma 3.6.

For each $\ell \in \mathbb {I} = \{K,\dots ,J\}$ , we define the annulus $\mathcal {B}_\ell = \mathcal {A} (R_{\ell - 1}, \delta R_{\ell +1})$ . The radial and angular tunnels from $\mathcal {A}_{\ell }$ into and around $\mathcal {A}_{\ell -1}$ will be subsets of $\mathcal {B}_\ell $ . The inner radius of $\mathcal {B}_\ell $ is at least $R_1$ because

$$ \begin{align*} \ell \in \mathbb{I} \implies R_\ell> \delta^{-1} R_1 (m_\ell + 1) \geq 10^7 \implies \ell \geq 2. \end{align*} $$

The first implication is due to equations (3.1) and (3.2); the second is due to the fact that $R_\ell = 10^{5\ell }$ .

The following lemma identifies subsets of $\mathcal {B}_\ell $ which are empty of A (Figure 9). Recall that $m_\ell = n_\ell + n_{\ell -1}$ .

Figure 9 The regions identified in Lemma 3.3. The tan sectors and dark blue annuli are subsets of the overlapping annuli $\mathcal {B}_\ell $ and $\mathcal {B}_{\ell -1}$ that are empty of A.

Lemma 3.3. Let $\ell \in \mathbb {I}$ . Denote $\varepsilon _\ell = (m_\ell +1)^{-1}$ and $\delta ' = \delta /10$ . For every $\ell \in \mathbb {I}$ , there is an angle $\vartheta _\ell \in [0,2\pi )$ and a radius $a_{\ell -1} \in [10 R_{\ell -1}, \delta ' R_{\ell })$ such that the following regions contain no element of A:

  • the sector of $\mathcal {B}_\ell $ subtending the angular interval $\left [\vartheta _\ell , \vartheta _\ell + 2\pi \varepsilon _\ell \right )$ , hence the ‘middle third’ subsector

    $$\begin{align*}\mathrm{Sec}_\ell = \left[ R_{\ell}, \delta' R_{\ell+1} \right) \times \left[ \vartheta_\ell + \tfrac{2\pi}{3} \varepsilon_\ell, \, \vartheta_\ell + \tfrac{4\pi}{3} \varepsilon_\ell \right);\,\,\,\text{and}\end{align*}$$
  • the subannulus $\mathrm {Ann}_{\ell -1} = \mathcal {A} (a_{\ell -1}, b_{\ell -1})$ of $\mathcal {B}_\ell $ , where we define

    $$ \begin{align*} b_{\ell-1} = a_{\ell-1} + \Delta_{\ell-1} \quad \text{for} \quad \Delta_{\ell-1} = \delta' \varepsilon_{\ell} R_{\ell} \end{align*} $$
    and, in particular, the circle $\mathrm {Circ}_{\ell -1} = C\big ( \tfrac {a_{\ell -1} + b_{\ell -1}}{2} \big )$ and the ‘arc’
    $$\begin{align*}\mathrm{Arc}_{\ell-1} = \mathrm{Circ}_{\ell-1} \cap \left\{x \in \mathbb{Z}^2: \arg x \in \left[\vartheta_\ell, \vartheta_\ell + 2\pi \varepsilon_\ell \right) \right\}.\end{align*}$$

We take a moment to explain the parameters and regions. Aside from $\mathcal {B}_\ell $ , which overlaps $\mathcal {A}_\ell $ and $\mathcal {A}_{\ell -1}$ , the subscripts of the regions indicate which annulus contains them (e.g., $\mathrm {Sec}_\ell \subset \mathcal {A}_\ell $ and $\mathrm {Ann}_{\ell -1} \subset \mathcal {A}_{\ell -1}$ ). The proof uses the pigeonhole principle to identify regions of $\mathcal {B}_\ell $ (a sector and the subannulus $\mathrm {Ann}_{\ell -1}$ ) which contain none of the at most $m_\ell $ elements of A in $\mathcal {B}_\ell $ ; this motivates our choice of $\varepsilon _\ell $ . A key aspect of $\mathrm {Sec}_\ell $ is that it is separated from $\partial \mathcal {B}_\ell $ by a distance of at least $R_{\ell -1}$ , which will allow us to position one end of a rectangular tunnel of width $R_{\ell -1}$ in $\mathrm {Sec}_\ell $ without the tunnel exiting $\mathcal {B}_\ell $ . We also need the inner radius of $\mathrm {Ann}_{\ell -1}$ to be at least $R_{\ell -1}$ greater than that of $\mathcal {B}_\ell $ , hence the lower bound on $a_{\ell -1}$ . The other key aspect of $\mathrm {Ann}_{\ell -1}$ is its overlap with $\mathrm {Sec}_{\ell -1}$ . The specific constants (e.g., $\tfrac {2\pi }{3}$ , $10$ , and $\delta '$ ) are otherwise unimportant.

Proof of Lemma 3.3.

Fix $\ell \in \mathbb {I}$ . For $j \in \{0,\dots , m_\ell \}$ , form the intervals

$$\begin{align*}2\pi \varepsilon_\ell \left[j, j+1 \right) \,\,\, \text{and} \,\,\, 10 R_{\ell - 1} + \Delta_{\ell-1} [j,j+1).\end{align*}$$

$\mathcal {B}_\ell $ contains at most $m_\ell $ elements of A, so the pigeonhole principle implies that there are $j_1$ and $j_2$ in this range such that, if $\vartheta _\ell = j_1 2 \pi \varepsilon _\ell $ and if $a_{\ell -1} = 10 R_{\ell -1} + j_2 \Delta _{\ell -1}$ , then

$$\begin{align*}\mathcal{B}_\ell \cap \left\{x \in \mathbb{Z}^2: \arg x \in \big[\vartheta_\ell, \vartheta_\ell + 2 \pi \varepsilon_\ell \big) \right\} \cap A = \emptyset, \quad \text{and} \quad \mathcal{A} (a_{\ell-1}, a_{\ell-1} + \Delta_{\ell-1} ) \cap A = \emptyset. \end{align*}$$

Because $\mathcal {B}_\ell \supseteq \mathrm {Sec}_\ell $ and $\mathrm {Ann}_{\ell -1} \supseteq \mathrm {Arc}_{\ell -1} $ , for these choices of $\vartheta _\ell $ and $a_{\ell -1}$ , we also have $\mathrm {Sec}_\ell \cap A = \emptyset $ and $\mathrm {Arc}_{\ell -1} \cap A = \emptyset $ .

The next result bounds below the probability that the random walk tunnels ‘down’ from $\mathrm {Sec}_\ell $ to $\mathrm {Arc}_{\ell -1}$ . We state it without proof, as it is a simple consequence of the known fact that a random walk from the ‘bulk’ of a rectangle exits its far, small side with a probability which is no smaller than exponential in the aspect ratio of the rectangle (Lemma A.4). In this case, the aspect ratio is $O(m_\ell )$ .

Lemma 3.4. There is a constant c such that, for any $\ell \in \mathbb {I}$ and every $y \in \mathrm {Sec}_\ell $ ,

$$ \begin{align*} \mathbb{P}_y \big( \tau_{\mathrm{Arc}_{\ell-1}} < \tau_A \big) \geq c^{m_\ell}. \end{align*} $$

The following lemma bounds below the probability that the random walk tunnels ‘around’ $\mathrm {Ann}_{\ell -1}$ , from $\mathrm {Arc}_{\ell -1}$ to $\mathrm {Sec}_{\ell -1}$ . (This result applies to $\ell \in \mathbb {I} \setminus \{K\} = \{K+1, \dots , J\}$ because Lemma 3.3 defines $\mathrm {Sec}_\ell $ for $\ell \in \mathbb {I}$ .) Like Lemma 3.4, we state it without proof because it is a simple consequence of Lemma A.4. Indeed, random walk from $\mathrm {Arc}_{\ell -1}$ can reach $\mathrm {Sec}_{\ell -1}$ without exiting $\mathrm {Ann}_{\ell -1}$ by appropriately exiting each rectangle in a sequence of $O(m_\ell )$ rectangles of aspect ratio $O(1)$ . Applying Lemma A.4 then implies equation (3.5).

Lemma 3.5. There is a constant c such that, for any $\ell \in \mathbb {I} \setminus \{K\}$ and every $z \in \mathrm {Arc}_{\ell -1}$ ,

(3.5) $$ \begin{align} \mathbb{P}_z \big( \tau_{\mathrm{Sec}_{\ell -1}} < \tau_A \big) \geq c^{m_\ell}. \end{align} $$

The next result combines Lemma 3.4 and Lemma 3.5 to tunnel from $\mathcal {A}_J$ into $\mathcal {A}_{K-1}$ . Because the random walk tunnels from $\mathcal {A}_\ell $ to $\mathcal {A}_{\ell -1}$ with a probability no smaller than exponential in $m_\ell = n_\ell + n_{\ell -1}$ , the bound in equation (3.6) is no smaller than exponential in $\sum _{\ell =K-1}^{J-1} n_\ell $ (recall that $n_J = 0$ ).

Lemma 3.6. There is a constant c such that

(3.6) $$ \begin{align} \mathbb{P}_{\, \mu_{R_J}} \left( \tau_{\mathrm{Arc}_{K-1}} < \tau_A \right) \geq c^{\sum_{\ell = K-1}^{J - 1} n_\ell}. \end{align} $$

Proof. Denote by G the event

$$\begin{align*}\big\{ \tau_{\mathrm{Arc}_{J-1}} < \tau_{\mathrm{Sec}_{J-1}} < \tau_{\mathrm{Arc}_{J-2}} < \tau_{\mathrm{Sec}_{J-2}} < \cdots < \tau_{\mathrm{Sec}_{K}} < \tau_{\mathrm{Arc}_{K-1}} < \tau_A \big\}.\end{align*}$$

Lemma 3.4 and Lemma 3.5 imply that there is a constant $c_1$ such that

(3.7) $$ \begin{align} \mathbb{P}_z (G) \geq c_1^{\sum_{\ell=K-1}^{J-1} n_\ell} \,\,\,\text{for}\ z \in C(R_{J}) \cap \mathrm{Sec}_{J}. \end{align} $$

The intersection of $\mathrm {Sec}_{J}$ and $C(R_{J})$ subtends an angle of at least $n_{J-1}^{-1}$ , so there is a constant $c_2$ such that

(3.8) $$ \begin{align} \mu_{R_J} (\mathrm{Sec}_{J}) \geq c_2 n_{J-1}^{-1}. \end{align} $$

The inequality (3.6) follows from $G \subseteq \{\tau _{\mathrm {Arc}_{K-1}} < \tau _A \}$ , and equations (3.7) and (3.8):

$$\begin{align*}\mathbb{P}_{\mu_{R_J}} \left( \tau_{\mathrm{Arc}_{K-1}} < \tau_A \right) \geq \mathbb{P}_{\mu_{R_J}} (G) \geq c_2 n_{J-1}^{-1} \cdot c_1^{\sum_{\ell=K-1}^{J-1} n_\ell} \geq c_3^{\sum_{\ell=K-1}^{J-1} n_\ell}. \end{align*}$$

For the third inequality, we take $c_3 = (c_1 c_2)^2$ .
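For completeness, here is one way to verify this choice, assuming, as we may by decreasing the constants, that $c_1, c_2 \leq e^{-1}$, and writing $S = \sum_{\ell=K-1}^{J-1} n_\ell \geq n_{J-1} \geq 1$ (as the bound (3.8) presupposes):

$$\begin{align*} c_1^{S} c_2^{2S-1} \leq c_1^{S} \leq e^{-S} \leq e^{-n_{J-1}} \leq \frac{1}{n_{J-1}}, \quad\text{so}\quad c_2 n_{J-1}^{-1} \cdot c_1^{S} \geq c_1^{2S} c_2^{2S} = (c_1 c_2)^{2S}. \end{align*}$$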

3.2.3 Inputs to Stage 3 when $K=I$

We continue to assume that $n_0 \neq n$ , as the alternative case is addressed in Section 3.3. Additionally, we assume $K = I$ . We briefly recall some important context. When $K=I$ , at the end of Stage 2, the random walk has reached $\mathrm {Circ}_{I-1} \subseteq \mathcal {A}_{I-1}$ , where $\mathrm {Circ}_{I-1}$ is a circle with a radius in $[R_{I-1}, \delta ' R_I)$ . (Note that $I>1$ when $K=I$ because I must then satisfy equation (3.1), so the radius of $\mathrm {Circ}_{I-1}$ is at least $R_1$ .) Since $\mathcal {A}_I$ is the innermost annulus which contains an element of A, the random walk from $\mathrm {Arc}_{I-1}$ must simply reach the origin before hitting $A_{> R_I}$ . In this subsection, we estimate this probability.

We will use the potential kernel associated with random walk on $\mathbb {Z}^2$ . We denote it by $\mathfrak {a}$ . It equals zero at the origin, is harmonic on $\mathbb {Z}^2 {\setminus } \{o\}$ and satisfies

(3.9) $$ \begin{align} \left| \mathfrak{a}(x) - \frac{2}{\pi}\log {|x|} - \kappa \right| \leq \lambda |x|^{-2}, \end{align} $$

where $\kappa \in (1.02,1.03)$ is an explicit constant and $\lambda $ is less than $0.06882$ [Reference Kozma and SchreiberKS04]. In some instances, we will want to apply $\mathfrak {a}$ to an element which belongs to $C(r)$ . It will be convenient to denote, for $r> 0$ ,

$$ \begin{align*} \mathfrak{a}' (r) = \frac{2}{\pi} \log r + \kappa. \end{align*} $$
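For a sense of scale (a numerical illustration only, not used elsewhere), equation (3.9) pins down $\mathfrak {a}$ very precisely away from the origin. For instance, when $|x| = 10^3$,

$$\begin{align*} \big| \mathfrak{a}(x) - \mathfrak{a}'(|x|) \big| \leq \lambda |x|^{-2} < 7 \times 10^{-8}, \qquad \mathfrak{a}'(10^3) = \tfrac{2}{\pi} \log 10^3 + \kappa \approx 4.398 + \kappa \in (5.41, 5.43). \end{align*}$$

This is why, in what follows, $\mathfrak {a} (y)$ may be exchanged for $\mathfrak {a}' (|y|)$ at the cost of errors which are negligible relative to the logarithmic terms.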

We will need the following standard hitting probability estimate (see, for example, Proposition 1.6.7 of [Reference LawlerLaw13]), which we state as a lemma because we will use it in other sections as well.

Lemma 3.7. Let $y \in D_x (r)$ for $r \geq 2 (|x| + 1)$ , and assume $y \neq o$ . Then

(3.10) $$ \begin{align} \mathbb{P}_y \left( \tau_o < \tau_{C_x (r)} \right) = \frac{\mathfrak{a}' (r) - \mathfrak{a} (y) + O \left( \frac{| x | + 1}{r} \right)}{ \mathfrak{a}' (r) + O \left( \frac{|x| + 1}{r} \right)}.\end{align} $$

The implicit constants in the error terms are less than one.
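For orientation, specializing to $x = o$ (so that $D_x(r) = D(r)$ and $C_x(r) = C(r)$, and the hypothesis becomes $r \geq 2$) and using equation (3.9) to replace $\mathfrak {a} (y)$, the estimate reads, for $y \in D(r) {\setminus } \{o\}$,

$$\begin{align*} \mathbb{P}_y \left( \tau_o < \tau_{C(r)} \right) = \frac{\mathfrak{a}'(r) - \mathfrak{a}(y) + O(r^{-1})}{\mathfrak{a}'(r) + O(r^{-1})} = \frac{\tfrac{2}{\pi} \log \tfrac{r}{|y|} + O(|y|^{-2}) + O(r^{-1})}{\tfrac{2}{\pi} \log r + \kappa + O(r^{-1})}, \end{align*}$$

the familiar statement that a walk started at distance $|y|$ from the origin hits it before reaching distance r with probability comparable to $\log (r/|y|) / \log r$.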

If $R_{I} < e^{8n}$ , then no further machinery is needed to prove the Stage 3 estimate.

Lemma 3.8. There exists a constant c such that, if $R_{I} < e^{8n}$ , then

$$ \begin{align*} \mathbb{P}_\infty \left( \tau_{C(R_1)} < \tau_{A} \bigm\vert \tau_{\mathrm{Circ}_{I-1}} < \tau_A \right) \geq \frac{c}{n}. \end{align*} $$

The bound holds because the random walk must exit $D(R_{I})$ to hit $A_{\geq R_I}$. By a standard hitting estimate, the probability that the random walk instead hits the origin first is at least a constant multiple of $1/\log R_{I}$, which is at least a constant multiple of $1/n$ when $R_{I} < e^{8n}$.

Proof of Lemma 3.8.

Uniformly for $y \in \mathrm {Circ}_{I-1}$ , we have

(3.11) $$ \begin{align} \mathbb{P}_y \left( \tau_{C(R_1)} < \tau_A \right) \geq \mathbb{P}_y \left( \tau_{o} < \tau_{C(R_{I})} \right) \geq \frac{\mathfrak{a}' (R_{I}) - \mathfrak{a}' (\delta R_{I-1}) - \tfrac{1}{R_{I}} - \tfrac{1}{\delta R_{I}}}{\mathfrak{a}' (R_{I}) + \tfrac{1}{R_{I}}} \geq \frac{1}{\mathfrak{a}' (R_{I})}. \end{align} $$

The first inequality follows from the observation that $C(R_1)$ and $C(R_{I})$ separate y from o and A. The second inequality is due to Lemma 3.7, where we have replaced $\mathfrak {a} (y)$ by $\mathfrak {a}' (\delta R_{I}) + \tfrac {1}{\delta R_{I}}$ using Lemma A.2 and the fact that $|y| \leq \delta R_{I}$ . The third inequality follows from $\delta R_{I} \geq 10^3$ . To conclude, we substitute $\mathfrak {a}' (R_{I}) = \tfrac {2}{\pi } \log R_{I} + \kappa $ into equation (3.11) and use the assumption that $R_{I} < e^{8n}$ .
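Spelling out this last step: since $\log R_{I} < 8n$ and $\kappa < 1.03 \leq 1.03\, n$, uniformly over $y \in \mathrm {Circ}_{I-1}$,

$$\begin{align*} \mathbb{P}_y \big( \tau_{C(R_1)} < \tau_A \big) \geq \frac{1}{\mathfrak{a}'(R_{I})} = \frac{1}{\tfrac{2}{\pi} \log R_{I} + \kappa} > \frac{1}{\tfrac{16}{\pi} n + 1.03\, n} > \frac{1}{7n}, \end{align*}$$

and, since this bound is uniform over $\mathrm {Circ}_{I-1}$, the strong Markov property applied at the hitting time of $\mathrm {Circ}_{I-1}$ transfers it to the conditional probability in the statement of Lemma 3.8; for instance, $c = \tfrac 17$ suffices in this case.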

We will use the rest of this subsection to prove the bound of Lemma 3.8, but under the complementary assumption $R_{I} \geq e^{8n}$ . This is one of the two estimates we highlighted in Section 2.2.

Next is a standard result, which enables us to express certain hitting probabilities in terms of the potential kernel. We include a short proof for completeness.

Lemma 3.9. For any pair of points $x, y \in \mathbb {Z}^2$ , define

$$ \begin{align*} M_{x,y}(z) =\frac{ \mathfrak{a} (x-z)-\mathfrak{a}(y-z)}{2\mathfrak{a}(x-y)}+\frac{1}{2}. \end{align*} $$

Then $M_{x,y} (z) = \mathbb {P}_z (\sigma _y < \sigma _x)$ .

Proof. Fix $x, y \in \mathbb {Z}^2$ . Theorem 1.4.8 of [Reference LawlerLaw13] states that for any proper subset B of $\mathbb {Z}^2$ (including infinite B) and bounded function $F: \partial B \to \mathbb {R}$ , the unique bounded function $f: B \cup \partial B \to \mathbb {R}$ which is harmonic in B and equals F on $\partial B$ is $f(z) = \mathbb {E}_z [ F (S_{\sigma _{\partial B}})] $ . Setting $B = \mathbb {Z}^2 {\setminus } \{x,y\}$ and $F(z) = \mathbf {1} (z = y)$ , we have $f(z) = \mathbb {P}_z (\sigma _y < \sigma _x)$ . Since $M_{x,y}$ is bounded, harmonic on B and agrees with f on $\partial B$ , the uniqueness of f implies $M_{x,y} (z) = f(z)$ .
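As a quick check of the boundary values, using $\mathfrak {a} (o) = 0$ and the symmetry $\mathfrak {a} (w) = \mathfrak {a} (-w)$:

$$\begin{align*} M_{x,y}(y) = \frac{\mathfrak{a}(x-y) - \mathfrak{a}(o)}{2 \mathfrak{a}(x-y)} + \frac12 = 1, \qquad M_{x,y}(x) = \frac{\mathfrak{a}(o) - \mathfrak{a}(y-x)}{2 \mathfrak{a}(x-y)} + \frac12 = 0, \end{align*}$$

which match $F(y) = 1$ and $F(x) = 0$; the harmonicity of $M_{x,y}$ on B follows from that of $\mathfrak {a}$ away from the origin.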

The next two results partly implement the first estimate that we discussed in Section 2.2.

Lemma 3.10. For any $z,z' \in \mathrm {Circ}_{I-1}$ and $y \in D(R_{I})^c$ ,

(3.12) $$ \begin{align} \mathbb{P}_z (\tau_{y}<\tau_{o}) \le \frac{1}{2} \quad\text{and} \quad \left| \mathbb{P}_z (\tau_{y}<\tau_{o})-\mathbb{P}_{z'}(\tau_{y}<\tau_{o}) \right| \leq \frac{2}{\log R_{I}}.\end{align} $$

The first inequality in equation (3.12) holds because z is appreciably closer to the origin than it is to y. The second holds because, when the difference $M_{o,y}(z) - M_{o,y} (z')$ is written over the common denominator $2\mathfrak {a} (y)$, a Taylor expansion shows that the numerator is $O(1)$, while the denominator is at least $\log R_{I}$.

Proof of Lemma 3.10.

By Lemma 3.9,

$$\begin{align*}\mathbb{P}_z (\tau_y < \tau_{o}) = \frac12 + \frac{\mathfrak{a} (z) - \mathfrak{a} (y - z)}{2 \mathfrak{a} (y)}.\end{align*}$$

The first inequality in equation (3.12) then follows from $\mathfrak {a} (y-z) \geq \mathfrak {a} (z)$ , which is due to (1) of Lemma A.1. This fact applies because $|y-z| \geq 2 |z| \geq 4$ . The first of these bounds holds because $|z| \leq \delta R_{I} + 1$ and $|y-z| \geq (1-\delta ) R_{I} -1$ since $\mathrm {Circ}_{I-1} \subseteq D(\delta R_{I})$ ; the second holds because the radius of $\mathrm {Circ}_{I-1}$ is at least $R_1$ since $I>1$ when $K=I$ .

Using Lemma 3.9, the difference in equation (3.12) can be written as

(3.13) $$ \begin{align} \left| M_{o,y} (z) - M_{o,y} (z') \right| \leq \frac{| \mathfrak{a}(z) - \mathfrak{a} (z') |}{2 \mathfrak{a} (y)} + \frac{|\mathfrak{a} (y-z) - \mathfrak{a} (y-z')|}{2\mathfrak{a} (y)}. \end{align} $$

By Lemma A.2, in terms of $r = \mathrm {rad} (\mathrm {Circ}_{I-1})$, $\mathfrak {a} (z)$ and $\mathfrak {a} (z')$ differ from $\mathfrak {a}' (r)$ by no more than $r^{-1}$. Since $r \geq R_1$, this implies $| \mathfrak {a} (z) - \mathfrak {a} (z') | \leq 2R_1^{-1}$. Concerning the denominator, $\mathfrak {a} (y)$ is at least $\tfrac {2}{\pi } \log |y| \geq \tfrac {2}{\pi } \log R_{I}$ by (2) of Lemma A.1, which applies because $|y| \geq 1$; the second inequality holds because $|y| \geq R_{I}$, as $y \in D(R_{I})^c$. We apply (3) of Lemma A.1 with $R = R_{I}$ and $r = \mathrm {rad} (\mathrm {Circ}_{I-1}) \leq \delta R_{I}$ to bound the second numerator, $|\mathfrak {a} (y-z) - \mathfrak {a} (y-z')|$, by $\tfrac {4}{\pi }$. Substituting these bounds into equation (3.13) gives

$$\begin{align*}\left| \mathbb{P}_z (\tau_{y}<\tau_{o})-\mathbb{P}_{z'}(\tau_{y}<\tau_{o}) \right| \leq \frac{1}{\frac{2}{\pi} R_1 \log R_I} + \frac{1}{\log R_I} \leq \frac{2}{\log R_I}. \end{align*}$$

Label the k elements in $A_{\geq R_1}$ by $x_i$ for $1 \leq i \leq k$. Then let $Y_i = \mathbf {1} (\tau _{x_i} < \tau _o)$ and $W = \sum _{i=1}^{k} Y_i$. In words, W counts the number of elements of $A_{\geq R_1}$ that the random walk visits before it hits the origin.

Lemma 3.11. If $R_{I} \geq e^{8n}$ , then, for all $z \in \mathrm {Circ}_{I-1}$ ,

(3.14) $$ \begin{align} \mathbb{E}_z [W\mid W>0]\ge \mathbb{E}_z W+\frac{1}{4}. \end{align} $$

The constant $\tfrac 14$ in equation (3.14) is unimportant, beyond being positive and independent of n. The inequality holds because random walk from $\mathrm {Circ}_{I-1}$ hits a given element of $A_{\geq R_1}$ before the origin with a probability of at most $\frac 12$. Consequently, given that some such element is hit, the conditional expectation of W essentially exceeds its unconditional counterpart by a constant.

Proof of Lemma 3.11.

Fix $z \in \mathrm {Circ}_{I-1}$. When $\{W>0\}$ occurs, some labeled element, $x_f$, is hit first. After $\tau _{x_f}$, the random walk may proceed to hit other $x_i$ before returning to $\mathrm {Circ}_{I-1}$ at the time $\eta = \min \left \{ t \geq \tau _{x_f}: S_t \in \mathrm {Circ}_{I-1} \right \}.$ Let $\mathcal {V} = \{i: \tau _{x_i} < \eta \}$ be the set of indices of the labeled elements that the walk visits before time $\eta $. In terms of $\mathcal {V}$ and $\eta $, the conditional expectation of W is

(3.15) $$ \begin{align} \mathbb{E}_z [W\mid W>0]=\mathbb{E}_z \Big[ |\mathcal{V} | + \mathbb{E}_{S_\eta} \sum_{i\notin \mathcal{V}} Y_i \Bigm\vert W > 0\Big]. \end{align} $$

Let V be a nonempty subset of the labeled elements, and let $z' \in \mathrm {Circ}_{I-1}$ . We have

$$\begin{align*}\Big| \,\mathbb{E}_z \sum_{i \notin V} Y_i - \mathbb{E}_{z'} \sum_{i \notin V} Y_i \,\Big| \leq \frac{2n}{\log R_{I}} \leq \frac14.\end{align*}$$

The first inequality is due to Lemma 3.10 and the fact that there are at most n labeled elements outside of V. The second inequality follows from the assumption that $R_{I} \geq e^{8n}$ .

We use this bound to replace $S_\eta $ in equation (3.15) with z:

(3.16) $$ \begin{align} \mathbb{E}_z [W \bigm\vert W> 0] \geq \mathbb{E}_z \Big[ |\mathcal{V}| + \mathbb{E}_z \sum_{i\notin \mathcal{V}} Y_i \Bigm\vert W > 0 \Big] - \frac14. \end{align} $$

By Lemma 3.10, $\mathbb {P}_z (\tau _{x_i} < \tau _{o}) \leq \tfrac 12$ . Accordingly, for a nonempty subset V of labeled elements,

$$\begin{align*}\mathbb{E}_z \sum_{i \notin V} Y_i \geq \mathbb{E}_z W - \frac12 |V|.\end{align*}$$
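In more detail, the inequality above simply removes the at most $|V|$ terms indexed by V, each of which contributes at most $\tfrac 12$:

$$\begin{align*} \mathbb{E}_z \sum_{i \notin V} Y_i = \mathbb{E}_z W - \sum_{i \in V} \mathbb{P}_z \big( \tau_{x_i} < \tau_o \big) \geq \mathbb{E}_z W - \frac12 |V|. \end{align*}$$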

Substituting this into the inner expectation of equation (3.16), we find

$$ \begin{align*} \mathbb{E}_z [W \bigm\vert W> 0] &\geq \mathbb{E}_z \Big[ |\mathcal{V}| + \mathbb{E}_z W - \frac12 |\mathcal{V}| \Bigm\vert W > 0 \Big] - \frac14\\ & \geq \mathbb{E}_z W + \mathbb{E}_z \left[ \frac12 |\mathcal{V}| \Bigm\vert W > 0 \right] - \frac14. \end{align*} $$

Since $\{W> 0\} = \{ |\mathcal {V}| \geq 1\}$ , this lower bound is at least $\mathbb {E}_z W + \frac 14$ .

We use the preceding lemma to prove the analogue of Lemma 3.8 when $R_{I} \geq e^{8n}$ . The proof uses the method highlighted in Section 2.2 and Figure 5 (left).

Lemma 3.12. There exists a constant c such that, if $R_{I} \geq e^{8n}$ , then

(3.17) $$ \begin{align} \mathbb{P}_\infty \left( \tau_{C(R_1)} < \tau_A \bigm\vert \tau_{\mathrm{Circ}_{I-1}} < \tau_A \right) \geq \frac{c}{n}. \end{align} $$

Proof. Conditionally on $\{\tau _{\mathrm {Circ}_{I-1}} < \tau _A\}$, let the random walk hit $\mathrm {Circ}_{I-1}$ at z. Label the $k \leq n$ elements of $A_{\geq R_1}$ by $x_i$ for $1 \leq i \leq k$. Let $Y_{i}=\mathbf {1} (\tau _{x_i} < \tau _{o})$ and $W=\sum _{i = 1}^{k} Y_i$, just as we did for Lemma 3.11. The claimed bound (3.17) follows from

$$\begin{align*}\mathbb{P}_z (\tau_A < \tau_{C(R_1)}) \leq \mathbb{P}_z (W>0)=\frac{\mathbb{E}_z W}{\mathbb{E}_z [W\mid W>0]} \leq \frac{\mathbb{E}_z W}{\mathbb{E}_z W + 1/4} \leq \frac{n}{n+1/4} \leq 1 - \frac{1}{5n}.\end{align*}$$

The first inequality follows from the fact that $C(R_1)$ separates z from the origin. The second inequality is due to Lemma 3.11, which applies because $R_{I} \geq e^{8n}$ . Since the resulting expression increases with $\mathbb {E}_z W$ , we obtain the third inequality by substituting n for $\mathbb {E}_z W$ , as $\mathbb {E}_z W \leq n$ . The fourth inequality follows from $n \geq 1$ .
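To spell out how the claimed bound follows (under the simplifying reading that $\tau _A \neq \tau _{C(R_1)}$ almost surely, which holds whenever A is disjoint from $C(R_1)$): uniformly over $z \in \mathrm {Circ}_{I-1}$,

$$\begin{align*} \mathbb{P}_z \big( \tau_{C(R_1)} < \tau_A \big) = 1 - \mathbb{P}_z \big( \tau_A < \tau_{C(R_1)} \big) \geq \frac{1}{5n}, \end{align*}$$

and the strong Markov property, applied at the hitting time of $\mathrm {Circ}_{I-1}$ on the conditioning event, then yields equation (3.17) with $c = \tfrac 15$.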

3.2.4 Inputs to Stage 4 when $K = I$ and Stage 3 when $K\neq I$

The results in this subsection address the last stage of advancement in the two subcases of the case $n_0 \neq n$ : $K = I$ and $K \neq I$ . In the former subcase, the random walk has reached $C(R_1)$ ; in the latter subcase, it has reached $\mathrm {Circ}_{K-1}$ . Both subcases will be addressed by corollaries of the following known geometric fact, which we state in a form convenient for our purposes.

Let $\mathbb {Z}^{2\ast }$ be the graph with vertex set $\mathbb {Z}^2$ and with an edge between distinct x and y in $\mathbb {Z}^2$ when x and y differ by at most $1$ in each coordinate. For $B \subseteq \mathbb {Z}^2$ , we will define the $\ast $ -exterior boundary of B by

(3.18) $$ \begin{align} \partial_{\mathrm{ext}}^\ast B = \{x \in \mathbb{Z}^2: & \,x\text{ is adjacent in}\ \mathbb{Z}^{2\ast}\text{ to some}\ y \in B, \nonumber\\ & \quad \qquad \text{and there is a path from}\ \infty\ \text{to}\ x\text{ disjoint from}\ B\}. \end{align} $$

Lemma 3.13. Let $A \in \mathscr {H}_n$ and $r> 0$ . From any $x \in C(r){\setminus }A$ , there is a path $\Gamma $ in $(A{\setminus }\{o\})^c$ from $\Gamma _1 = x$ to $\Gamma _{|\Gamma |} = o$ with a length of at most $10 \max \{r,n\}$ . Moreover, if $A \subseteq D(r)$ , then $\Gamma $ lies in $D(r+2)$ .

We choose the constant factor of $10$ for convenience; it has no special significance. We use the radius $r+2$ in $D(r+2)$ so that it contains the $\ast $ -exterior boundary of every subset of $D(r)$ .

Proof of Lemma 3.13.

Let $\{B_\ell \}_\ell $ be the collection of $\ast $ -connected components of $A{\setminus }\{o\}$ . By Lemma 2.23 of [Reference KestenKes86] (alternatively, Theorem 4 of [Reference TimárTim13]), because $B_\ell $ is finite and $\ast $ -connected, $\partial _{\mathrm {ext}}^\ast B_\ell $ is connected.

Fix $r> 0$ and $x \in C(r){\setminus }A$ . Let $\Gamma $ be a shortest path from x to the origin. If $\Gamma $ is disjoint from $A {\setminus } \{o\}$ , then we are done, as $|\Gamma |$ is no greater than $2r$ . Otherwise, let $\ell _1$ be the label of the first $\ast $ -connected component intersected by $\Gamma $ . Let i and j be, respectively, the first and last indices at which $\Gamma $ intersects $\partial _{\mathrm {ext}}^\ast B_{\ell _1}$ . Because $\partial _{\mathrm {ext}}^\ast B_{\ell _1}$ is connected, there is a path $\Lambda $ in $\partial _{\mathrm {ext}}^\ast B_{\ell _1}$ from $\Gamma _i$ to $\Gamma _j$ . We then edit $\Gamma $ to form $\Gamma '$ as

$$\begin{align*}\Gamma' = \left( \Gamma_1, \dots, \Gamma_{i-1}, \Lambda_1, \dots, \Lambda_{|\Lambda|}, \Gamma_{j+1},\dots, \Gamma_{|\Gamma|} \right).\end{align*}$$

If $\Gamma '$ is disjoint from $A {\setminus } \{o\}$ , then we are done, as $\Gamma '$ is contained in the union of $\Gamma $ and $\bigcup _\ell \partial _{\mathrm {ext}}^\ast B_\ell $ . Since $\bigcup _\ell B_\ell $ has at most n elements, $\bigcup _\ell \partial _{\mathrm {ext}}^\ast B_\ell $ has at most $8n$ elements. Accordingly, the length of $\Gamma '$ is at most $2r + 8n \leq 10 \max \{r,n\}$ . Otherwise, if $\Gamma '$ intersects another $\ast $ -connected component of $A {\setminus } \{o\}$ , we repeat the preceding argument with that component in place of $B_{\ell _1}$ , continuing inductively to obtain the same bound.

Lastly, if $A \subseteq D(r)$ , then $\bigcup _\ell \partial _{\mathrm {ext}}^\ast B_\ell $ is contained in $D(r+2)$ . Since $\Gamma $ is also contained in $D(r+2)$ , this implies that $\Gamma '$ is contained in $D(r+2)$ .

Lemma 3.13 implies two other results. The first addresses Stage 4 when