
Inhomogeneous Poisson processes in the disk and interpolation

Published online by Cambridge University Press:  30 April 2024

Andreas Hartmann
Univ. Bordeaux, CNRS, Bordeaux INP, IMB, Talence, France
Xavier Massaneda*
Departament de Matemàtiques i Informàtica, Universitat de Barcelona, Barcelona, Catalonia
Corresponding author: Xavier Massaneda


We investigate different geometrical properties, related to Carleson measures and pseudo-hyperbolic separation, of inhomogeneous Poisson point processes on the unit disk. In particular, we give conditions so that these random sequences are almost surely interpolating for the Hardy, Bloch or weighted Dirichlet spaces.

Research Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence, which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
© The Author(s), 2024. Published by Cambridge University Press on Behalf of The Edinburgh Mathematical Society.

1. Introduction and main results

Important notions in spaces of analytic functions include zero-sets, Carleson measures, interpolation, sampling, frames, etc. Such properties have been studied for many well-known spaces of analytic functions in a deterministic setting. A canonical example is the Hardy space, where all these properties are well established, see [Reference Garnett16]. In certain spaces, such properties admit theoretical characterizations, which are not checkable in general (e.g. interpolation in Dirichlet spaces), while in other situations, a general characterization is not available at all (see e.g. [Reference Seip24] for a general reference on interpolation). In these circumstances, it is useful to consider a random setting, which allows one to see whether certain properties are ‘generic’ in some sense. The random model we are interested in here is the Poisson point process.

A Poisson point process in the unit disk $\mathbb{D}$ is a random sequence Λ defined in the following way: for any Borel set $A\subset \mathbb{D}$ the counting random variable $N_A=\# (A\cap\Lambda) $ is well defined and

  • (a) $N_A$ is a Poisson random variable, i.e., there exists $\mu(A)\geq 0$ such that the probability distribution of $N_A$ is

    \begin{equation*} \mathbb{P}(N_A=k)=\mathrm{e}^{-\mu(A)} \frac{(\mu(A))^k}{k!},\quad k\geq 0. \end{equation*}

    In particular, $\mathbb{E}[N_A]={\rm Var}[N_A]=\mu(A)$.

  • (b) If $A,B\subset\mathbb{D}$ are disjoint Borel sets, then the variables $N_A$ and $N_B$ are independent.

It turns out that these two properties uniquely characterize the point process. Also, the values $\mu(A)$ define a σ-finite Borel measure on $\mathbb{D}$, which is called the intensity of the process.
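Properties (a) and (b) can be illustrated by the standard construction for a *finite* intensity measure: draw $N$ from a Poisson distribution with parameter equal to the total mass, then place $N$ i.i.d. points distributed according to the normalized measure. The following Python sketch does this; the choice of intensity (a multiple of the normalized Lebesgue measure on the disk, total mass 100) is purely illustrative.

```python
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; adequate for moderate parameters
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_poisson_process(total_mass, sample_point, rng):
    """Poisson process with finite intensity measure mu:
    N ~ Poisson(mu(D)) points, each drawn i.i.d. from mu / mu(D)."""
    n = sample_poisson(total_mass, rng)
    return [sample_point(rng) for _ in range(n)]

def uniform_disk_point(rng):
    # rejection sampling from the bounding square
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1:
            return complex(x, y)

rng = random.Random(0)
pts = sample_poisson_process(100.0, uniform_disk_point, rng)
```

With this construction, the counts in disjoint Borel sets are automatically independent Poisson variables, which is exactly the content of (a) and (b).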

The Poisson process is a well-known statistical model for point distributions with no (or weak) interactions, and it has multiple applications in a great variety of fields. Because of property (b), it is clearly not adequate to describe distributions in which each point is not statistically independent of the other points of the process. For such situations, other models have been proposed (e.g. determinantal processes or zeros of Gaussian analytic functions for random sequences with repulsion or Cox processes for situations with positive correlations and clumping [Reference Hough, Krishnapur, Peres and Virág18]).

It is also possible to create a Poisson process from a given, σ-finite, locally finite, positive Borel measure µ in $\mathbb{D}$, in the sense that there exists a point process $\Lambda_\mu$ with intensity µ, i.e., whose counting functions satisfy properties (a) and (b) above. This is a well-known, non-trivial fact that can be found, for example, in [Reference Last and Penrose19, Theorem 3.6]. Such a Poisson process $\Lambda_\mu$ is sometimes called inhomogeneous or non-stationary.

In this paper, given a positive Borel measure µ on $\mathbb{D}$, we study elementary geometric properties of the inhomogeneous Poisson process of intensity µ, specifically in relation to conditions used to describe interpolating sequences for various spaces of analytic functions in $\mathbb{D}$. We shall always assume that $\mu(\mathbb{D})=+\infty$, since otherwise $\Lambda_\mu$ would be finite almost surely.

The probabilistic point of view has already been explored before in connection with interpolation. Here, we mention Cochran [Reference Cochran10] and Rudowicz [Reference Rudowicz22], who considered the probabilistic model $\Lambda=\{r_n e^{i\theta_n}\}_n$ in which the radii $\{r_n\}_n\subset(0,1)$ are fixed a priori and the arguments $\theta_n$ are chosen uniformly and independently in $[0,2\pi]$ (a so-called Steinhaus sequence). For this model, they established a zero-one condition on $\{r_n\}_n$ so that the resulting random sequence is almost surely interpolating for the Hardy spaces. In [Reference Chalmoukis, Hartmann, Kellay and Wick9], similar results, for the same probabilistic model, were proven for the scale of weighted Dirichlet spaces between the Hardy space and the classical Dirichlet space. See also [Reference Dayan, Wick and Wu13] for related results in the unit ball and the polydisk.

We should emphasize that while the previously discussed models involved a deterministic part in fixing a priori a sequence of radii (satisfying the Blaschke condition), the Poisson process is a natural choice to get rid of this deterministic part. We would also like to mention that Poisson processes give rise to new properties and phenomena. For instance, as we prove in this article, Carleson measures for Hardy spaces are completely characterized for Poisson processes, which is not known so far for the case of the radial model. Also, a characterization of almost surely interpolating sequences for the Bloch space in the case of Poisson processes is new and follows quite immediately from our results and previous deterministic characterizations. A last observation concerning the techniques: while these are inspired by those introduced for the radial model, they are not straightforward and have to take into account, in general, the non-radial character of the process.

Our results are given in terms of a function $F_\mu$ defined in the following way. Let

\begin{equation*} \rho(z,w)=\Bigl|\frac{z-w}{1-\bar w z}\Bigr|,\ \quad z,w\in\mathbb{D}, \end{equation*}

denote the pseudo-hyperbolic distance in $\mathbb{D}$, and let

\begin{equation*} D(z,r)=\{w\in\mathbb{D} : \rho(z,w) \lt r\},\quad z\in\mathbb{D}, \ r\in (0,1), \end{equation*}

be the discs defined by ρ. Given a positive measure µ in $\mathbb{D}$, define

\begin{equation*} F_\mu(z)=\mu\bigl(D(z,1/2)\bigr),\quad z\in\mathbb{D}. \end{equation*}

It will be clear from the proofs that the analogous results hold if $F_\mu(z)$ is replaced by $F_{\mu,c}(z)=\mu (D(z,c))$, where $c\in (0,1)$ is fixed. We will also need the invariant measure, defined by

(1)\begin{equation} \mathrm{d}\nu(z)=\frac {\mathrm{d}m(z)}{(1-|z|^2)^2}, \end{equation}

where $\mathrm{d}m$ denotes the normalized Lebesgue measure in $\mathbb{D}$. Observe that the measure $F_\mu\,\mathrm{d}\nu$ can be seen as a regularized version of $\mathrm{d}\mu$.
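For concreteness, the pseudo-hyperbolic distance and its invariance under automorphisms of $\mathbb{D}$ (the property behind the uniform bounds on $\nu(T_{n,k})$ used later) can be checked numerically. This snippet is only a sanity check of the definitions, not part of any argument; the sample points are arbitrary.

```python
def rho(z, w):
    """Pseudo-hyperbolic distance in the unit disk."""
    return abs((z - w) / (1 - w.conjugate() * z))

def mobius(a, z):
    """Disk automorphism z -> (a - z) / (1 - conj(a) z), |a| < 1."""
    return (a - z) / (1 - a.conjugate() * z)

z, w, a = 0.3 + 0.2j, -0.5 + 0.1j, 0.4 - 0.3j
# rho is invariant under automorphisms of the disk
assert abs(rho(z, w) - rho(mobius(a, z), mobius(a, w))) < 1e-12
```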

A first geometric property on random sequences we are interested in is separation.

Definition 1.1. A sequence $\Lambda=\{\lambda_k\}_{k\geq 1}\subset \mathbb{D}$ is separated if there exists δ > 0 such that

\begin{equation*} \rho(\lambda_k,\lambda_l)\ge\delta,\quad k\neq l. \end{equation*}

When we need to specify the separation constant we say that Λ is δ-separated.

We are now in a position to state our first result characterizing those $\Lambda_\mu$ which can (almost surely) be expressed as finite unions of separated sequences.

Theorem 1.2. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ and let $M\geq 1$ be an integer. Then,

\begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {union\, of\, {M}\, separated\, sequences}\bigr)= \left\{\begin{array}{ll} 1\quad \textrm{if}\quad \int_{\mathbb{D}} F_\mu^{M+1}(z) d\nu(z) \lt \infty \\ 0\quad \textrm{if}\quad \int_{\mathbb{D}} F_\mu^{M+1}(z) d\nu(z)=\infty. \end{array}\right. \end{equation*}

In particular,

\begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {separated}\bigr)= \left\{\begin{array}{ll} 1\quad \textrm{if}\quad \int_{\mathbb{D}} F_\mu^{2}(z) d\nu(z) \lt \infty \\ 0\quad \textrm{if}\quad \int_{\mathbb{D}} F_\mu^{2}(z) d\nu(z)=\infty. \end{array}\right. \end{equation*}

The characterization of a.s. separated sequences was first obtained, with a different proof, in [Reference Aparicio Monforte2, Theorem 3.2.1].

Remark 1.3. As will be explained in § 2, the conditions $\int_{\mathbb{D}} F_\mu^{\gamma}(z)\,\mathrm{d}\nu(z) \lt \infty$, γ > 1, have equivalent discrete formulations in terms of the standard dyadic partition of $\mathbb{D}$ (see § 2 for the corresponding notation and in particular Proposition 2.1 for the equivalent reformulation). We choose to write the statements here in terms of $F_\mu$ for the sake of simplicity. However, we will only use the dyadic discretization in the proofs. This is ultimately due to property (b) of the Poisson process (the independence of the counting random variables associated with disjoint dyadic regions), which allows for simple computations and the free use of the first and second Borel–Cantelli lemmas (which are recalled in Lemma 1.7).
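To get a feeling for how the discrete condition behaves, consider a hypothetical radial model in which every dyadic box at level $n$ receives the same mass $\mu_{n,k}=2^{-na}$ for a parameter $a>0$ (this toy model is ours, not taken from the paper). Since there are $2^n$ boxes at level $n$, the discrete sum becomes $\sum_n 2^{n(1-a\gamma)}$, which converges exactly when $a\gamma>1$. A short numerical check:

```python
def discrete_sum(a, gamma, n_max):
    """Partial sum of sum_{n,k} mu_{n,k}^gamma for the toy model
    mu_{n,k} = 2^{-n a}: the 2^n boxes of level n contribute 2^{n(1 - a*gamma)}."""
    return sum(2.0 ** (n * (1 - a * gamma)) for n in range(n_max))

# a * gamma > 1: partial sums stabilize (geometric decay);
# a * gamma < 1: they blow up geometrically.
converged = discrete_sum(0.6, 2, 2000)   # 0.6 * 2 = 1.2 > 1
diverging = discrete_sum(0.4, 2, 2000)   # 0.4 * 2 = 0.8 < 1
```

By Theorem 1.2 (with $M=1$), this toy process would thus be almost surely separated precisely when $a>1/2$.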

Our second result deals with so-called α-Carleson sequences. Given any arc $I\subset \mathbb{T}=\partial \mathbb{D}$, let $|I|$ denote its normalized length and consider the associated Carleson window

\begin{equation*} Q(I)=\bigl\{z=r\mathrm{e}^{\mathrm{i}\theta}\in \mathbb{D} : r \gt 1-|I|, \, \mathrm{e}^{\mathrm{i}\theta}\in I\bigr\}. \end{equation*}

Definition 1.4. Let $\alpha\in (0, 1]$. The sequence Λ satisfies the α-Carleson condition if there exists C > 0 such that for all arcs $I\subset\mathbb{T}$

\begin{equation*} \sum_{\lambda\in Q(I)} (1-|\lambda|)^\alpha \leq C |I|^\alpha. \end{equation*}

Such sequences will also be called α-Carleson sequences.
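Definition 1.4 can be tested directly on a finite configuration. The sketch below checks the condition over dyadic windows only, which (as in the proof of Theorem 1.5) suffices up to constants; the sample sequence $\lambda_m=1-2^{-m}$ is just an illustration.

```python
import math

def dyadic_carleson_constant(points, alpha=1.0, max_level=12):
    """sup over dyadic Carleson windows Q(I_{n,k}) of
    sum_{lambda in Q} (1-|lambda|)^alpha, divided by |I_{n,k}|^alpha."""
    best = 0.0
    for n in range(max_level):
        length = 2.0 ** (-n)  # |I_{n,k}|, in units of full turns
        for k in range(2 ** n):
            s = 0.0
            for z in points:
                t = (math.atan2(z.imag, z.real) / (2 * math.pi)) % 1.0
                if 1 - abs(z) < length and k * length <= t < (k + 1) * length:
                    s += (1 - abs(z)) ** alpha
            best = max(best, s / length ** alpha)
    return best

# Radial test sequence lambda_m = 1 - 2^{-m}: separated, hence 1-Carleson
pts = [complex(1 - 2.0 ** -m, 0) for m in range(1, 11)]
c = dyadic_carleson_constant(pts)
```

For this exact dyadic configuration the supremum is attained at the level-0 window and equals $1-2^{-10}$.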

The sequences Λ satisfying the 1-Carleson condition are by far the most studied, because of their role in the famous characterization of the interpolating sequences for the algebra $H^\infty$ of bounded holomorphic functions, given by Carleson [Reference Carleson8] (see § 4). They are sometimes found in the literature under the name of Carleson-Newman sequences.

The α-Carleson property above is a special case of a more general condition: a finite, positive Borel measure σ on $\mathbb{D}$ is a Carleson measure of order $\alpha\in (0,1]$ if $\sigma(Q(I))\le C|I|^\alpha$ for some C > 0 and all intervals I. As shown by Carleson (see e.g. [Reference Garnett16]), Carleson measures (of order 1) are precisely those for which the embedding $H^2\subset L^2(\mathbb{D},\sigma)$ holds; here, $H^2$ is the classical Hardy space (see the definition in § 4.1). Carleson measures of order α < 1 have been used, for example, in providing sufficient conditions for solvability of the $\bar\partial_b$-equation in Lp, $L^{p,\infty}$ and in Lipschitz spaces of the boundary of strictly pseudoconvex domains [Reference Amar and Bonami3].

Theorem 1.5. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ. Then

  • (a)

    \begin{align*} & \mathbb{P}\bigl(\Lambda_\mu\ {is\, a\, 1-Carleson\, sequence}\bigr)\\ &\qquad\qquad\quad = \left\{\begin{array}{ll} 1\quad \textrm{if $\, \exists\, \gamma \gt 1$ such that}\ \int_{\mathbb{D}} F_\mu^{\gamma}(z) d\nu(z) \lt \infty, \\ 0\quad \textrm{if $\quad \int_{\mathbb{D}} F_\mu^{\gamma}(z) d\nu(z)=\infty$ for all }{{\gamma}\, \gt \,1.} \end{array}\right. \end{align*}
  • (b) Let $\alpha\in (0,1)$. If there exists $1 \lt \gamma \lt \frac 1{1-\alpha}$ such that $ \int_{\mathbb{D}} F_\mu^{\gamma} (z)\, d\nu(z) \lt \infty$, then

    \begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {is\, {\alpha}-Carleson}\bigr)=1 \end{equation*}
  • (c) There exists a positive, σ-finite, locally finite measure µ such that $\int_{\mathbb{D}} F_\mu^{1/(1-\alpha)}(z)\, d\nu(z) \lt \infty$ and

    \begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {is\, {\alpha}-Carleson}\bigr)=0. \end{equation*}
  • (d) For every γ > 1 there exists a positive, σ-finite, locally finite measure µ such that $\int_{\mathbb{D}} F_\mu^{\gamma}(z)\, d\nu(z)=+\infty$ but

    \begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {is\, {\alpha}-Carleson}\bigr)=1 \end{equation*}

    for all $\alpha\in (0,1)$.

Remark 1.6.

(1) The first statement in part (a) is connected with the first part of the statement in Theorem 1.2, since it is a well-known fact that every 1-Carleson (or Carleson-Newman) sequence can be split into a finite number of separated sequences, each of which is of course 1-Carleson [Reference McDonald and Sundberg20, Lemma 21] (obviously a finite union of arbitrary separated sequences need not be Carleson-Newman). However, Theorem 1.5(a) does not give precise information on the number of separated sequences involved. It is also worth mentioning that the condition for a.s. separation from Theorem 1.2 automatically implies the Carleson condition (picking $\gamma=2 \gt 1$). This is perhaps more surprising and may be explained by the nature of the process: the independence of the different points allows for big fluctuations, so the probability of finding pairs of points arbitrarily close is quite large unless the number of points in the process is severely restricted (up to $\int_{\mathbb{D}} F_\mu^{2} (z)\,\mathrm{d}\nu(z) \lt \infty$).

(2) It is interesting to point out that for the inhomogeneous Poisson process we have a characterization of 1-Carleson sequences, while in the a priori simpler random model with fixed radii and random arguments there is only a sufficient – still optimal – condition (see [Reference Chalmoukis, Hartmann, Kellay and Wick9, Theorem 1.4]).

(3) In the case $\alpha\in (0,1)$, the results are less precise than when α = 1. The value $1/(1-\alpha)$ turns out to be an optimal breakpoint but nothing specific can be said beyond this value without additional conditions on the distribution of µ. The example given in (c) is part of a certain parameter-dependent scale of measures which will be discussed in § 5 and for which the α-Carleson condition is characterized in terms of the parameter.

(4) Our conditions, both here and in Theorem 1.2, are expressed in terms of the measure function $F_\mu(z)=\mu\bigl(D(z,1/2)\bigr)$. Redistributing µ continuously near z (by convolution) if necessary, we can always assume that µ is absolutely continuous with respect to the Lebesgue measure.

(5) It is tempting to try to establish links between the growth of the intensity measure and numerical estimates of the separation or the Carleson measure constants. However, most of our results are based on the Borel–Cantelli lemma (see Lemma 1.7), which intrinsically gives no information on the finite number of events which do not fulfil the underlying conditions (separation by a given constant or bounds of the Carleson measure constants), nor on how far they are from satisfying those conditions.

The structure of the paper is as follows. As mentioned earlier, our proofs are written in terms of the standard dyadic discretization of µ, which is introduced in § 2. In particular, we give the equivalent discrete formulations of the integral conditions for $F_\mu(z)$ given in the statements above. In § 3, we prove the main Theorems 1.2 and 1.5. Section 4 deals with the consequences of these results in the study of interpolating sequences for various spaces of holomorphic functions. In particular, we find precise conditions so that a Poisson process $\Lambda_\mu$ is almost surely an interpolating sequence for the Hardy spaces $H^p$, $0 \lt p\leq\infty$, the Bloch space $\mathcal B$ or the Dirichlet spaces $\mathcal D_\alpha$, $\alpha\in (1/2,1)$. A final section is devoted to providing examples of Poisson processes associated with some simple measures.

We finish this introduction by recalling the Borel–Cantelli lemma (first and second part), which is a central tool in this paper. We refer to [Reference Billingsley4] for a general source on probability theory. Given a sequence of events $A_k$, let $\limsup A_k=\{\omega:\omega\in A_k$ for infinitely many $k\}$.

Lemma 1.7. Let $(A_k)_k$ be a sequence of events in a probability space. Then,

  1. (1) If $\sum \mathbb{P}(A_k) \lt \infty$, then $\mathbb{P}(\limsup A_k)=0$,

  2. (2) If the events $A_k$ are independent and $\sum \mathbb{P}(A_k)=\infty$, then $\mathbb{P}(\limsup A_k)=1$.

Acknowledgements: The authors would like to thank Joaquim Ortega-Cerdà, for proposing the study of Poisson processes and for indicating the equivalence between the continuous and discrete conditions (Proposition 2.1), and the referee, for the careful reading of the manuscript.

2. Discretization of the integral conditions

Our proofs are written in terms of the following standard dyadic discretization of µ.

Consider first the dyadic annuli

\begin{equation*} A_n=\{z\in \mathbb{D}:2^{-(n+1)} \lt 1-|z|\leq 2^{-n}\}, \quad n\geq 0. \end{equation*}

Each $A_n$ can be split into $2^n$ boxes of the same size $2^{-n}$:

\begin{equation*} T_{n,k}=\bigl\{z=r\mathrm{e}^{\mathrm{i}t}\in A_n: \frac{k}{2^n}\le \frac t{2\pi} \lt \frac{k+1}{2^n}\bigr\},\quad k=0,1,\ldots,2^n-1. \end{equation*}

These boxes can be viewed as the top halves of the Carleson windows

\begin{equation*} Q(I_{n,k})=\bigl\{z=r\mathrm{e}^{\mathrm{i}\theta}\in \mathbb{D} : r \gt 1-2^{-n}, \, \mathrm{e}^{\mathrm{i}\theta}\in I_{n,k}\bigr\} \end{equation*}

associated with the dyadic intervals

(2)\begin{equation} I_{n,k}=\bigl\{ \mathrm{e}^{\mathrm{i}t}\in\mathbb{T} : \frac{k}{2^n}\le \frac t{2\pi} \lt \frac{k+1}{2^n}\bigr\}\ ,\quad n\geq 0\ ,\, k=0,1,\ldots, 2^{n}-1. \end{equation}

Figure 1. Carleson window $Q(I_{n,k})$ associated with the dyadic interval $I_{n,k}$ and its top half $T_{n,k}$.
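A small helper, useful for numerical experiments with this partition, locates the box $T_{n,k}$ containing a given point; the formulas simply invert the definitions of $A_n$ and $T_{n,k}$ above (the function name is ours).

```python
import math

def dyadic_index(z):
    """Return (n, k) with z in T_{n,k}:
    n is determined by 2^{-(n+1)} < 1-|z| <= 2^{-n},
    k is the dyadic angular index of arg(z) at level n."""
    n = int(math.floor(-math.log2(1 - abs(z))))
    t = (math.atan2(z.imag, z.real) / (2 * math.pi)) % 1.0  # angle in turns
    k = int(t * 2 ** n)
    return n, k
```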

Observe also that there exist constants $c_1,c_2\in(0,1)$ such that

\begin{equation*} D(z_{n,k},c_1)\subset T_{n,k}\subset D(z_{n,k},c_2), \end{equation*}

where $z_{n,k}$ is the centre of $T_{n,k}$ (explicitly $z_{n,k}=(1-\frac 32\, 2^{-(n+1)})\, \mathrm{e}^{2\pi \mathrm{i}\,(2k+1)/2^{n+1}}$). In particular, by the invariance of ν by automorphisms of $\mathbb{D}$, there exist constants $C_1,C_2 \gt 0$ such that

\begin{equation*} C_1\leq\nu(T_{n,k})\leq C_2,\quad n\geq 0, \, k=0,\dots,2^n-1. \end{equation*}

Denote $X_{n,k}=N_{T_{n,k}}$, which by hypothesis is a Poisson random variable of parameter

\begin{equation*} \mu_{n,k}:=\mathbb{E}[X_{n,k}]={\rm Var} [X_{n,k}]= \mu(T_{n,k}). \end{equation*}

In these terms, the assumption $\mu(\mathbb{D})=+\infty$ is just

\begin{equation*} \mu(\mathbb{D})=\sum_{n\in\mathbb{N}} \sum_{k=0}^{2^n-1} \mu_{n,k}=\sum_{n,k}\mu_{n,k}=+\infty. \end{equation*}

The integral conditions given in the theorems above have the following discrete reformulation, which will be used throughout the proofs in the forthcoming sections.

Proposition 2.1. Let µ be a positive, locally finite, σ-finite measure on the unit disk and let γ > 1. The following two conditions are equivalent:

  • (a) $\int_{\mathbb{D}} F_{\mu}^\gamma(z) d\nu(z) \lt +\infty$,

  • (b) $\sum\limits_{n,k} \mu_{n,k}^\gamma \lt +\infty$.

Proof. We first remark that condition (b) is equivalent to its analogue where instead of the dyadic partition $\{T_{n,k}\}_{n,k}$ a ‘δ-adic’ partition of $\mathbb{D}$ is considered. More precisely, let $\delta\in (0,1)$ and consider the δ-adic rings

\begin{equation*} A_n(\delta)=\{z\in\mathbb{D} : \delta^{n+1} \lt 1-|z|\leq \delta^n\}\ ,\quad n\geq 0 \end{equation*}

and the boxes

\begin{equation*} T_{n,k}(\delta)=\{z=r\mathrm{e}^{\mathrm{i}t}\in A_n(\delta) : k\leq \frac t{2\pi} [1/\delta] \lt k+1\}, \, k=0,\dots,[1/\delta]-1. \end{equation*}

Each $T_{n,k}(\delta)$ is contained in at most a finite number – depending only on δ, but not on (n, k) – of $T_{m,j}$, and reciprocally each $T_{m,j}$ is contained in at most a finite number of $T_{n,k}(\delta)$. This shows that for any given γ > 1, and letting $\mu_{n,k}(\delta)=\mu\bigl(T_{n,k}(\delta)\bigr)$,

\begin{equation*} \sum\limits_{n,k} \mu_{n,k}^\gamma \lt +\infty\ \Longleftrightarrow\ \sum\limits_{n,k} \mu_{n,k}^\gamma(\delta) \lt +\infty. \end{equation*}

(a)$\Rightarrow$(b). Take $\delta\in (0,1)$ small enough so that $T_{n,k}(\delta)\subset D(z,1/2)$ for all $z\in T_{n,k}(\delta)$. Then, $\mu_{n,k}(\delta)\leq F_\mu(z)$ for all such z and

\begin{align*} \sum\limits_{n,k} \mu_{n,k}^\gamma(\delta)&\lesssim \sum_{n,k} \int_{T_{n,k}(\delta)} F_\mu^\gamma(z)\,\mathrm{d}\nu(z). \end{align*}

Since $\nu(T_{n,k}(\delta))$ is bounded above and below by constants depending only on δ (but not on (n, k)), we have

\begin{equation*} \sum\limits_{n,k} \mu_{n,k}^\gamma(\delta)\lesssim \sum_{n,k} \int_{T_{n,k}(\delta)} F_\mu^\gamma(z)\,\mathrm{d}\nu(z)=\int_{\mathbb{D}} F_\mu^\gamma(z)\,\mathrm{d}\nu(z). \end{equation*}

(b)$\Rightarrow$(a). Observe that for $z\in T_{n,k}$ the disc $D(z,1/2)$ is contained in the union of $T_{n,k}$ and its eight adjacent boxes $T_{m,j}$. Denote these boxes by $T_{n,k}^j$, $j=0,\dots, 8$, with $T_{n,k}^0=T_{n,k}$. Then,

\begin{align*} \int_{\mathbb{D}} F_\mu^\gamma(z)\,\mathrm{d}\nu(z)&\leq \sum_{n,k}\int_{T_{n,k}} \mu^\gamma(D(z,1/2))\,\mathrm{d}\nu(z)\lesssim \sum_{n,k} \left(\sum_{j=0}^8 \mu(T_{n,k}^j)\right)^\gamma \\ &\lesssim \sum_{n,k} \left(\sum_{j=0}^8 \mu^\gamma (T_{n,k}^j)\right)\lesssim \sum_{n,k} \mu_{n,k}^\gamma. \end{align*}

3. Proof of Theorems 1.2 and  1.5

Proof of Theorem 1.2

Assume first that $\int_{\mathbb{D}} F_\mu^{M+1} (z)\, d\nu(z) \lt \infty$ or, equivalently, that $\sum_{n,k}\mu_{n,k}^{M+1} \lt +\infty$. Define the events

\begin{equation*} A_{n,k}=\{X_{n,k} \gt M\}=\{X_{n,k}\ge M+1\}. \end{equation*}


Since $X_{n,k}$ is a Poisson variable of parameter $\mu_{n,k}$,

\begin{equation*} \mathbb{P}(A_{n,k})=1-\sum_{j=0}^M \mathbb{P}(X_{n,k}=j)=1-\mathrm{e}^{-\mu_{n,k}} \bigl(\sum_{j=0}^M \frac{\mu_{n,k}^j}{j!}\bigr). \end{equation*}

By hypothesis $\lim\limits_n(\sup_k\mu_{n,k})= 0$, so we can use Taylor’s formula

(3)\begin{equation} 1-\mathrm{e}^{-x}(\sum_{j=0}^M\frac{x^j}{j!})=\frac{x^{M+1}}{(M+1)!}+o(x^{M+1})\qquad x\to 0 \end{equation}

to deduce that

\begin{equation*} \sum_{n,k} \mathbb{P}(A_{n,k})\lesssim \sum_{n,k}\frac{\mu_{n,k}^{M+1}}{(M+1)!} \lt +\infty. \end{equation*}

By the Borel–Cantelli lemma almost surely $X_{n,k}\leq M$ for all but at most a finite number of $T_{n,k}$.
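Expansion (3) is easy to confirm numerically: the following check compares the exact tail $\mathbb{P}(X\ge M+1)$ of a Poisson variable of small parameter $x$ with the leading term $x^{M+1}/(M+1)!$ (the relative-error bound $2x$ is a generous empirical tolerance, consistent with the $o(x^{M+1})$ remainder).

```python
import math

def poisson_upper_tail(x, M):
    """P(X >= M+1) for X ~ Poisson(x), via the complementary sum as in (3)."""
    return 1 - math.exp(-x) * sum(x ** j / math.factorial(j) for j in range(M + 1))

M = 2
for x in (1e-1, 1e-2, 1e-3):
    leading = x ** (M + 1) / math.factorial(M + 1)
    # relative error of the leading-term approximation shrinks linearly in x
    assert abs(poisson_upper_tail(x, M) / leading - 1) < 2 * x
```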

In principle, this does not imply that $\Lambda_\mu$ can be split into M separated sequences, because it might happen that points of two neighbouring $T_{n,k}$ come arbitrarily close. This possibility is excluded by repeating the above arguments for a new dyadic partition, made of shifted boxes $\tilde{T}_{n,k}$ having the ‘lower vertices’ (those closer to $\mathbb{T}$) at the centre of the $T_{n,k}$’s (see Figure 2); let

\begin{equation*} \tilde T_{n,k}=\Bigl\{z=re^{it}: \frac 32 2^{-(n+2)} \lt 1-r\leq \frac 32 2^{-(n+1)}\, ;\ \frac{k+1/4}{2^n}\le \frac t{2\pi} \lt \frac{k+3/4}{2^n}\Bigr\}. \end{equation*}

Figure 2. Dyadic partitions: $\{T_{n,k}\}_{n,k}$ in blue, $\{\tilde T_{n,k}\}_{n,k}$ in red.

Since each $\tilde T_{n,k}$ is included in the union of at most four $T_{m,j}$, we still have $\sum_{n,k} \tilde\mu_{n,k}^{M+1} \lt \infty$, and therefore, as before, $\tilde X_{n,k}=N_{\tilde T_{n,k}}$ is at most M, except possibly for a finite number of indices (n, k). This excludes the possibility that points from two adjacent boxes $T_{n,k}$ come arbitrarily close in groups of more than M points. In conclusion, for all but a finite number of indices $X_{n,k}\leq M$; hence the part of $\Lambda_\mu$ in these boxes can be split into M separated sequences. Adding the remaining finite number of points to any of these sequences may change the separation constant but not the fact that they are separated.

Assume now that $\sum_{n,k}\mu_{n,k}^{M+1}=+\infty$. We shall prove that for every $\delta_{l_0}=2^{-l_0}$, $l_0\in\mathbb{N}$,

\begin{equation*} \mathbb{P}\bigl({{\Lambda}}\,\, {\mathrm{union\,\, of\,\, {\mathit{M}}\,\, \delta_{l_0}-separated\,\, sequences}}\bigr)=0. \end{equation*}

Split each side of $T_{n,k}$ into $2^{l_0}$ segments of the same length. This defines a partition of $T_{n,k}$ into $2^{2l_0}$ small boxes of side length $2^{-n} 2^{-l_0}$, which we denote by

\begin{equation*} T_{n,k}^{l_0,j}\qquad j=1,\dots, 2^{2l_0}. \end{equation*}

Let $X_{n,k}^{l_0,j}=N_{T_{n,k}^{l_0,j}}$ denote the corresponding counting variable, which follows a Poisson law of parameter $\mu_{n,k,l_0,j}=\mu(T_{n,k}^{l_0,j})$.

It is enough to show that for any $l_0$,

\begin{equation*} \mathbb{P}(X_{n,k}^{l_0,j} \gt M \text{ for infinitely many }n,k,j)=1. \end{equation*}

By the second part of the Borel–Cantelli lemma, since the $X_{n,k}^{l_0,j}$ are independent, we shall be done as soon as we see that

(4)\begin{equation} \sum_{n,k}\sum_{j=1}^{2^{2 l_0}} \mathbb{P}\bigl(X_{n,k}^{l_0,j}\ge M+1\bigr)=+\infty. \end{equation}

For any Poisson variable X of parameter λ, the probability

\begin{equation*} \mathbb{P}(X\ge M+1)=\mathrm{e}^{-\lambda}\bigl(\sum_{m=M+1}^\infty \frac{\lambda^m}{m!}\bigr)=1-\mathrm{e}^{-\lambda}\bigl(\sum_{m=0}^M \frac{\lambda^m}{m!}\bigr) \end{equation*}

increases in λ. Hence, we may assume without loss of generality that $0\leq\mu_{n,k,l_0,j}\leq \mu_{n,k}\leq 1/2$ for all $n,k,j$. Then, we can use Taylor’s formula (3) to deduce that

\begin{equation*} \mathbb{P}\bigl(X_{n,k}^{l_0,j}\ge M+1\bigr)\simeq \frac{\mu_{n,k,l_0,j}^{M+1}}{(M+1)!}, \end{equation*}

and therefore, (4) is equivalent to

\begin{equation*} \sum_{n,k}\sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}^{M+1}=+\infty. \end{equation*}

The fact that this sum is infinite is just a consequence of the hypothesis and Jensen’s inequality

\begin{align*} \mu_{n,k}^{M+1}=\Bigl(\sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}\Bigr)^{M+1} \leq 2^{2l_0 (M+1)} \sum_{j=1}^{2^{2l_0}} \frac{\mu_{n,k,l_0,j}^{M+1}}{2^{2l_0}} \leq 2^{2l_0 M} \sum_{j=1}^{2^{2l_0}} \mu_{n,k,l_0,j}^{M+1}. \end{align*}
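The Jensen-inequality step, $(\sum_j a_j)^{M+1}\le K^{M}\sum_j a_j^{M+1}$ for $K$ nonnegative terms (here $K=2^{2l_0}$), follows from the convexity of $t\mapsto t^{M+1}$ and can be verified on random data:

```python
import random

# (sum a_j)^{M+1} <= K^M * sum a_j^{M+1}: convexity of t -> t^{M+1}
rng = random.Random(1)
for _ in range(200):
    K = rng.randint(1, 16)
    M = rng.randint(1, 4)
    a = [rng.random() for _ in range(K)]
    lhs = sum(a) ** (M + 1)
    rhs = K ** M * sum(x ** (M + 1) for x in a)
    assert lhs <= rhs * (1 + 1e-12)  # tolerance only for float roundoff
```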

Proof of Theorem 1.5

(a) Assume first that $\sum_{n,k} \mu_{n,k}^{\gamma} \lt +\infty$ for some γ > 1. Observe that it is enough to check the Carleson condition

\begin{equation*} \sum_{\lambda\in Q(I)} (1-|\lambda|)\leq C |I| \end{equation*}

on the dyadic intervals $I_{n,k}$ given in (2). Let $Q_{n,k}=Q(I_{n,k})$. Decomposing the sum over the different layers $A_m$, it is enough to show that almost surely there exists C > 0 such that for all $n\geq 0$, $k=0,\dots, 2^{n}-1$

\begin{equation*} \sum_{\lambda\in Q_{n,k}} (1-|\lambda|)\simeq \sum_{m\geq n} \sum_{j: T_{m,j}\subset Q_{n,k}} 2^{-m} X_{m,j}\leq C 2^{-n} . \end{equation*}

This is equivalent to

(5)\begin{equation} \sup_{n,k}\ 2^n\sum_{m\geq n} \sum_{j: T_{m,j}\subset Q_{n,k}} 2^{-m} X_{m,j} \lt \infty. \end{equation}


Let

\begin{equation*} X_{n,m,k}=N_{Q_{n,k}\cap A_m}=\#(\Lambda\cap Q_{n,k}\cap A_m)=\sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j}, \end{equation*}

which is a Poisson variable of parameter

\begin{equation*} \mu_{n,m,k}=\mu(Q_{n,k}\cap A_m)=\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}. \end{equation*}


Define

\begin{equation*} Y_{n,k}=2^n\sum_{m\ge n}2^{-m}X_{n,m,k}=\sum_{m\ge n}2^{n-m}X_{n,m,k}, \end{equation*}

so that (5) becomes $\sup_{n,k} Y_{n,k} \lt +\infty$.

Let A > 0 be a large constant to be fixed later on. Again by the Borel–Cantelli Lemma, it is enough to show that

(6)\begin{equation} \sum_{n,k} \mathbb{P}\bigl(Y_{n,k} \gt A\bigr) \lt +\infty, \end{equation}

since then $Y_{n,k}\leq A$ for all but maybe a finite number of $n,k$; in particular, $\sup_{n,k} Y_{n,k} \lt \infty$.

The first step of the following reasoning is an adaptation to the Poisson process of the proof given in [Reference Chalmoukis, Hartmann, Kellay and Wick9, Theorem 1.1], which allowed us to improve the result on Carleson sequences for the probabilistic model with fixed radii and random arguments. However, while in the original proof the Carleson boxes $Q_{n,k}$ are decomposed into layers $Q_{n,k}\cap A_m$ ($m\ge n$), in this new situation (as well as for (b)), Carleson boxes are decomposed into top-halves $T_{m,j}\subset Q_{n,k}$, which requires more delicate arguments to reach the convergence needed in the Borel–Cantelli lemma.

Recall that the probability generating function of a Poisson variable X of parameter λ is $\mathbb{E}(s^X)=\mathrm{e}^{\lambda(s-1)}$. By the independence of the different $X_{n,k,m}$, $m\ge n$,

\begin{equation*} \mathbb{E}(s^{Y_{n,k}})=\prod_{m\ge n}\mathbb{E}((s^{2^{n-m}})^{X_{n,m,k}}) =\prod_{m\ge n}\mathrm{e}^{\mu_{n,m,k}(s^{2^{n-m}}-1)}. \end{equation*}

Thus for any s > 1, by Markov’s inequality

\begin{equation*} \mathbb{P}(Y_{n,k} \gt A)=\mathbb{P}(s^{Y_{n,k}} \gt s^A) \le \frac{1}{s^A}\mathbb{E}(s^{Y_{n,k}})=\frac{1}{s^A}\prod_{m\ge n}\mathrm{e}^{\mu_{n,m,k}(s^{2^{n-m}}-1)}. \end{equation*}
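This Markov-inequality step can be sanity-checked on a single Poisson variable: minimizing $s^{-A}\mathbb{E}(s^X)=\mathrm{e}^{\lambda(s-1)-A\log s}$ over $s>1$ gives $s=A/\lambda$, and the resulting bound must dominate the exact tail. The parameter values below are chosen arbitrarily for illustration.

```python
import math

def poisson_tail(lam, A):
    """Exact P(X > A) for X ~ Poisson(lam), A a nonnegative integer."""
    return 1 - math.exp(-lam) * sum(lam ** j / math.factorial(j) for j in range(A + 1))

def chernoff_bound(lam, A):
    """s^{-A} E[s^X] = exp(lam(s-1) - A log s), evaluated at the
    minimizer s = A/lam (valid when A > lam, so that s > 1)."""
    s = max(A / lam, 1.0 + 1e-9)
    return math.exp(lam * (s - 1) - A * math.log(s))

# The optimized bound dominates the exact tail
for lam in (0.5, 1.0, 2.0):
    for A in (3, 5, 10):
        assert poisson_tail(lam, A) <= chernoff_bound(lam, A)
```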

Using the estimate $x(a^{1/x}-1)\le a$, for $a,x \gt 1$, with a = s and $x=2^{m-n}$,

\begin{align*} \log \mathbb{P}(Y_{n,k} \gt A)&\le -A\,\log s+\sum_{m\ge n}(s^{2^{n-m}}-1)\, \mu_{n,m,k}\\ &\le -A\log s+\sum_{m\ge n} s 2^{n-m} \, \mu_{n,m,k}\\ &=-A\,\log s+s\sum_{m\ge n} 2^{n-m} \sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j}. \end{align*}

We want to optimize this estimate for s > 1. Set

\begin{equation*} B_{n,k}=\sum_{m\ge n} 2^{-(m-n)} \sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j} \end{equation*}

and define

\begin{equation*} \phi (s)=-A\,\log s+s B_{n,k}. \end{equation*}

Let us observe first that the $B_{n,k}$ are uniformly bounded (they actually tend to 0). Indeed, let β denote the conjugate exponent of γ ($\frac 1{\gamma}+\frac 1{\beta}=1$). Since, for $m\geq n$, there are $2^{m-n}$ boxes $T_{m,j}$ in $Q_{n,k}$, by Hölder’s inequality on the sum in the index j we deduce that

\begin{align*} B_{n,k}&\le \sum_{m\ge n}2^{-(m-n)}\Bigl(\sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}^{\gamma} \Bigr)^{1/\gamma}2^{(m-n)/\beta}\\ &=\sum_{m\ge n}2^{-(m-n)/\gamma}\Bigl(\sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}^{\gamma} \Bigr)^{1/\gamma} \lt +\infty. \end{align*}

Taking A big enough we see that the minimum of ϕ is attained at $s_0=A/B_{n,k} \gt 1$. Hence,

\begin{equation*} \log \mathbb{P}(Y_{n,k} \gt A) \le \phi(s_0)=-A\,\log\frac{A}{B_{n,k}}+A. \end{equation*}


that is,

\begin{equation*} \mathbb{P}\bigl(Y_{n,k} \gt A\bigr)\le \left(\frac{B_{n,k}}{A}\right)^A\mathrm{e}^A, \end{equation*}


and therefore

\begin{equation*} \sum_{n,k} \mathbb{P}(Y_{n,k} \gt A) \le \left(\frac{e}{A}\right)^A\sum_{n,k}B_{n,k}^A. \end{equation*}

The estimate on $B_{n,k}$ obtained previously is not enough to prove that this last sum converges. In order to obtain a better estimate take p > 1, to be chosen later on, its conjugate exponent q (i.e. $\frac 1p+\frac 1q=1$), and apply Hölder’s inequality in the following way:

\begin{align*} B_{n,k}&=\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-(m-n)}\mu_{m,j} =2^n \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac mp}2^{-\frac mq}\mu_{m,j}\\ &\le 2^n\Bigl(\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\beta}p}\Bigr)^{1/\beta} \times\Bigl(\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\mu^{\gamma}_{m,j}\Bigr)^{1/\gamma}. \end{align*}

Choose now p so that $1 \lt p \lt \beta$; then,

\begin{align*} \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\beta}p} &=\sum_{m=n}^{\infty} 2^{-\frac{m\beta}p} \ 2^{m-n} =2^{-n}\sum_{m=n}^{\infty}2^{-m(\frac{\beta}p-1)} \simeq 2^{-n}2^{-n(\frac{\beta}p-1)}=2^{-n\frac{\beta}p}. \end{align*}

Thus, from the above estimate,

\begin{equation*} B_{n,k}\le 2^{\frac nq} \Bigl(\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \Bigr)^{1/\gamma}. \end{equation*}

Choosing $A=\gamma$ yields

\begin{equation*} \sum_{n,k}B_{n,k}^\gamma \lesssim \sum_{n,k} 2^{\frac{n\gamma}q} \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} . \end{equation*}

We now apply Fubini’s theorem to exchange the sums. The important observation here is that each $T_{m,j}$ has only one ancestor at each level $n\le m$ (i.e., one $Q_{n,k}$ containing $T_{m,j}$). Hence,

\begin{align*} \sum_{n,k} B_{n,k}^{\gamma}&\lesssim\sum_{m,j} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \sum_{\substack{n\leq m\\ k : Q_{n,k} \supseteq T_{m,j}}} 2^{\frac{n\gamma}q} =\sum_{m,j} 2^{-\frac{m\gamma}q}\ \mu^{\gamma}_{m,j} \sum_{n\le m} 2^{\frac {n\gamma} q}\\ &\lesssim \sum_{m,j} 2^{-\frac{m\gamma}q} \mu^{\gamma}_{m,j}\ 2^{\frac{m\gamma}q} =\sum_{m,j} \mu^{\gamma}_{m,j}. \end{align*}

This finishes the proof of (6), hence of this part of the theorem.

Let us now assume that $\sum_{n,k}\mu_{n,k}^{\gamma} =+\infty$ for every γ > 1. Suppose $M\ge 1$ is an integer. Since the sum diverges for $\gamma=M+1$, Theorem 1.2 implies that the sequence $\Lambda_{\mu}$ is almost surely not a union of M separated sequences. In particular, there is $\lambda_0\in\Lambda_{\mu}$ such that $D_{\lambda_0}=\{z\in \mathbb{D}:\rho(\lambda_0,z) \lt 1/2\}$ contains at least M + 1 points of $\Lambda_{\mu}$. Then, letting $I_{\lambda_0}$ be the interval centred at $\lambda_0/|\lambda_0|$ with length $1-|\lambda_0|$, we have $\sum_{\lambda\in Q(I_{\lambda_0})}(1-|\lambda|)\gtrsim M |I_{\lambda_0}|$, where the underlying constant does not depend on M or $\lambda_0$. This being true for every integer $M\ge 1$, the sequence cannot be 1-Carleson.

(b) Proceeding as in the first implication of (a) we see that it is enough to prove that almost surely

(7)\begin{equation} \sup_{n,k} Y_{n,k} \lt +\infty\ , \end{equation}

where now

(8)\begin{equation} \quad Y_{n,k}=2^{n\alpha}\sum_{m\geq n} 2^{-m\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j}. \end{equation}

The same estimates as in (a) based on the probability generating function yield, for s > 1,

\begin{equation*} \log \mathbb{P}\bigl(Y_{n,k}\geq A\bigr)\leq \phi(s)=-A\,\log s+ B_{n,k}\,s, \end{equation*}

where now

\begin{equation*} B_{n,k}=\sum_{m\geq n} 2^{-(m-n)\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}. \end{equation*}

As in (a), the hypotheses imply that $B_{n,k}$ is uniformly bounded: letting β denote the conjugate exponent to γ ($\frac 1{\gamma}+\frac 1{\beta}=1$) and noticing that $\alpha-1/\beta=1/\gamma-(1-\alpha) \gt 0$,

\begin{align*} B_{n,k}&\leq \sum_{m\geq n} 2^{-(m-n)\alpha} \Bigl(\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}^\gamma\Bigr)^{1/\gamma} \ 2^{(m-n)/\beta} \\ &= \sum_{m\geq n} 2^{-(m-n)(\alpha-1/\beta)} \Bigl(\sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}^\gamma\Bigr)^{1/\gamma}. \end{align*}

Therefore, optimizing the estimate for s > 1 exactly as we did in (a), we obtain $\mathbb{P}\bigl(Y_{n,k}\geq A\bigr)\lesssim B_{n,k}^A$, and we are led to prove that for some A > 0

(9)\begin{equation} \sum_{n,k} \mathbb{P}\bigl(Y_{n,k}\geq A\bigr)\lesssim \sum_{n,k} B_{n,k}^A \lt \infty. \end{equation}

Again, we introduce an auxiliary exponent p > 1, to be determined later, and its conjugate exponent q. Split $2^{-m\alpha}=2^{-\frac{m\alpha}p}2^{-\frac{m\alpha}q}$ and use Hölder’s inequality to obtain

\begin{align*} B_{n,k}& \le 2^{n\alpha}\Bigl(\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\beta}p}\Bigr)^{1/\beta} \times\Bigl(\sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}q}\mu^{\gamma}_{m,j}\Bigr)^{1/\gamma}. \end{align*}

The first sum is finite: since by hypothesis $\alpha\beta=\frac{\alpha \gamma}{\gamma-1} \gt 1$, there exists $1 \lt p \lt \frac{\alpha \gamma}{\gamma-1}$ and

\begin{align*} \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\beta}p}&=\sum_{m\geq n} 2^{-\frac{m\alpha\beta}p} 2^{m-n} =2^{-n} \sum_{m\geq n} 2^{-m(\frac{\alpha\beta}p-1)}\simeq 2^{-n\frac{\alpha\beta}p}. \end{align*}

This implies that

\begin{align*} B_{n,k}^\gamma\lesssim 2^{n\alpha\frac{\gamma}{q}} \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}{q}}\mu^{\gamma}_{m,j} \end{align*}

and we can conclude the proof of (9) as before:

\begin{align*} \sum_{n,k} B_{n,k}^\gamma &\lesssim\sum_{n,k} 2^{n\alpha\frac{\gamma}{q}} \sum_{\substack{m\geq n\\ j:T_{m,j}\subset Q_{n,k}}} 2^{-\frac{m\alpha\gamma}q} \mu^{\gamma}_{m,j} =\sum_{m,j} \mu^{\gamma}_{m,j} 2^{-m\alpha\frac{\gamma}{q}} \sum_{\substack{n\leq m\\ k: Q_{n,k} \supseteq T_{m,j} }} 2^{n\alpha\frac{\gamma}{q}}\\ &=\sum_{m,j} \mu^{\gamma}_{m,j} 2^{-m\alpha\frac{\gamma}{q}} \sum_{n\leq m} 2^{n\alpha\frac{\gamma}{q}}\simeq \sum_{m,j} \mu^{\gamma}_{m,j} \lt +\infty. \end{align*}

(c) Here, we give a measure µ for which $\sum_{n,k}\mu_{n,k}^{1/(1-\alpha)} \lt +\infty$ but $\mathbb{P}(\Lambda_\mu\ \textrm{is } \alpha\textrm{-Carleson})=0$. Let

\begin{equation*} \mathrm{d}\mu(z)= \frac{\mathrm{d}m(z)}{(1-|z|^2)^{\alpha+1}\,\log\bigl(\frac {\mathrm{e}}{1-|z|^2}\bigr)}= \frac{\mathrm{d}\nu(z)}{(1-|z|^2)^{\alpha-1}\,\log\bigl(\frac {\mathrm{e}}{1-|z|^2}\bigr)}, \end{equation*}

which is the measure $\mu=\mu(\alpha+1,1)$ given in the family of examples of § 5. By a simple computation (see (11)),

\begin{equation*} \mu_{n,k}\simeq \frac{2^{-n(1-\alpha)}}n\qquad n\geq 1,\ k=0, \dots, 2^n-1, \end{equation*}

and therefore, since k ranges over $2^n$ terms,

\begin{align*} \sum_{n,k}\mu_{n,k}^{1/(1-\alpha)}\simeq \sum_{n\geq 1} \frac{1}{n^{1/(1-\alpha)}} \lt +\infty. \end{align*}

On the other hand, letting $Y_{n,k}$ be as in the proof of part (b) (see (8)), we get

\begin{eqnarray*} \mathbb{E}(Y_{n,k})&=&2^{n\alpha} \sum_{m\ge n}2^{-m\alpha}\sum_{j:T_{m,j}\subset Q_{n,k}}\mu_{m,j} \simeq 2^{n\alpha} \sum_{m\ge n}2^{-m\alpha} 2^{m-n}\frac{2^{-(1-\alpha)m}}{m}\\ &=&2^{-(1-\alpha)n}\sum_{m\ge n}\frac{1}{m}=+\infty. \end{eqnarray*}

Thus, the expected weight of every single Carleson window $Q_{n,k}$ is infinite. In fact, since the variance of $Y_{n,k}$ is finite (replace $2^{n\alpha}$ by $2^{2n\alpha}$ and $2^{-m\alpha}$ by $2^{-2m\alpha}$ in the computation above to see this), Kolmogorov’s two-series theorem yields $Y_{n,k}=+\infty$ almost surely, and $\Lambda_{\mu}$ cannot be α-Carleson.
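The two computations in part (c) are easily confirmed numerically. The sketch below uses the illustrative choice α = 1/2 (so that $1/(1-\alpha)=2$) and the model box masses $\mu_{n,k}=2^{-n(1-\alpha)}/n$; the truncation levels are arbitrary:

```python
import math

alpha = 0.5                          # illustrative choice; 1/(1 - alpha) = 2
gamma = 1 / (1 - alpha)

def mu(n):
    # model mass of a single box T_{n,k}: 2^{-n(1-alpha)} / n
    return 2 ** (-n * (1 - alpha)) / n

def power_sum(N):
    # partial sum of sum_{n,k} mu_{n,k}^gamma, with 2^n boxes at level n
    return sum(2 ** n * mu(n) ** gamma for n in range(1, N + 1))

def truncated_mean(n, M):
    # truncation of E(Y_{n,0}) = 2^{-n(1-alpha)} sum_{m >= n} 1/m
    return 2 ** (-n * (1 - alpha)) * sum(1.0 / m for m in range(n, M + 1))
```

For α = 1/2 the power sums reduce to $\sum_n n^{-2}$ and stabilize near $\pi^2/6$, while the truncated means grow without bound, like log M.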

(d) One could think of considering a divergent series $\sum_{n,k}\mu_{n,k}^{\gamma}=+\infty$ such that $\sum_{n,k}\mu_{n,k}^{\gamma'} \lt +\infty$ for every $\gamma' \gt \gamma$, and then apply (b), showing that $\Lambda_{\mu}$ is α-Carleson when $\gamma' \lt \frac{1}{1-\alpha}$, i.e. when $\alpha \gt 1-\frac{1}{\gamma'}=\frac{\gamma'-1}{\gamma'}$. However, this does not yield the whole range $\alpha\in (0,1)$ for a fixed measure, as required by the statement.

In order to construct an example working for all $\alpha\in (0,1)$, we pick a measure µ supported in a Stolz angle of vertex 1, i.e. let, for $n\geq 1$,

\begin{equation*} \mu_{n,k}=\left\{\begin{array}{ll} \dfrac{1}{n^{1/\gamma}}&\textrm{if }k=0\\ \quad 0 &\textrm{if } k\geq 1. \end{array}\right. \end{equation*}

(We could equivalently take the measure $\tau(2,1/\gamma)$ given in § 5, Example 3). Then,

(10)\begin{equation} \sum_{n,k}\mu_{n,k}^{\gamma}=\sum_n \frac{1}{n}=\infty \end{equation}

but for every $\gamma' \gt \gamma$,

\begin{equation*} \sum_{n,k}\mu_{n,k}^{\gamma'}=\sum_n \frac{1}{n^{\gamma'/\gamma}} \lt +\infty. \end{equation*}

To prove that $\Lambda_\mu$ is almost surely α-Carleson we will argue as before. Set $Y_{n,k}$ as in the proof of (b) (see (8)) and follow the same steps to prove that

\begin{equation*} \mathbb{P}(Y_{n,k}\ge A)\lesssim B_{n,k}^A, \end{equation*}


where

\begin{equation*} B_{n,k}=\sum_{m\ge n} \sum_{j:T_{m,j}\subset Q_{n,k}} \mu_{m,j}2^{-(m-n)\alpha}. \end{equation*}

By construction, $B_{n,k}=0$ for all k > 0. On the other hand,

\begin{equation*} B_{n,0}=2^{n\alpha}\sum_{m\ge n}2^{-m\alpha}\mu_{m,0} =2^{n\alpha}\sum_{m\ge n}\frac{2^{-m\alpha}}{m^{1/\gamma}} \lesssim \frac{1}{n^{1/\gamma}}. \end{equation*}

(Observe that the decay in n of this last bound does not depend on α.) Hence,

\begin{equation*} \sum_{n,k}B_{n,k}^{\gamma'}=\sum_nB_{n,0}^{\gamma'} \lesssim \sum_n \frac{1}{n^{\gamma'/\gamma}} \lt +\infty, \end{equation*}

and as in the proof of (b) the Borel–Cantelli lemma allows us to conclude that $\Lambda_\mu$ is almost surely α-Carleson.
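The bound $B_{n,0}\lesssim n^{-1/\gamma}$ can also be checked numerically. In the sketch below (the choice γ = 2 and the truncation level M are illustrative), the normalized values stay below the explicit constant $1/(1-2^{-\alpha})$ coming from the geometric sum:

```python
gamma = 2.0                          # illustrative choice

def B(n, alpha, M=2000):
    # truncation of B_{n,0} = 2^{n alpha} sum_{m >= n} 2^{-m alpha} / m^{1/gamma}
    return 2 ** (n * alpha) * sum(2 ** (-m * alpha) / m ** (1 / gamma)
                                  for m in range(n, M + 1))

# normalized values B_{n,0} * n^{1/gamma}, for several levels n and exponents alpha
table = {(n, a): B(n, a) * n ** (1 / gamma)
         for n in (10, 20, 40) for a in (0.25, 0.5, 0.75)}
```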

4. Random interpolating sequences

In this section, we discuss several consequences of Theorems 1.2 and  1.5 on random interpolating sequences $\Lambda_\mu$ for various spaces of holomorphic functions in $\mathbb{D}$. The results are rather straightforward consequences of the aforementioned theorems and the known conditions for such sequences.

4.1. Hardy (and Bergman) spaces

In this section, we characterize the measures µ for which the associated Poisson process $\Lambda_\mu$ is almost surely an interpolating sequence for the Hardy spaces.

Recall that a sequence $\Lambda=\{\lambda_n\}_{n\in\mathbb{N}}\subset\mathbb{D}$ is interpolating for

\begin{equation*} H^\infty=\bigl\{f\in H(\mathbb{D}) : \|f\|_\infty=\sup_{z\in \mathbb{D}} |f(z)| \lt \infty\bigr\} \end{equation*}

if for every bounded sequence of values $\{w_n\}_{n\in\mathbb{N}}\subset\mathbb{C}$ there exists $f\in H^\infty$ such that $f(\lambda_n)=w_n$, $n\in\mathbb{N}$. According to a famous theorem of Carleson, Λ is $H^\infty$-interpolating if and only if it is separated and 1-Carleson [Reference Carleson8]. This characterization extends to all Hardy spaces

\begin{equation*} H^p=\Bigl\{f\in H(\mathbb{D}) : \|f\|_p=\sup_{r \lt 1}\Bigl(\int_0^{2\pi} |f(r\mathrm{e}^{\mathrm{i}t})|^p\, \frac{dt}{2\pi}\Bigr)^{1/p} \lt +\infty\Bigr\}\qquad 0 \lt p \lt \infty, \end{equation*}

for which the interpolation problem is defined in a similar manner (the data $w_n$ to be interpolated should satisfy $\sum_n(1-|\lambda_n|^2)|w_n|^p \lt +\infty$, see e.g. [Reference Duren14, Chapter 9]).

By Theorem 1.5, the separation condition given in Theorem 1.2 immediately implies that $\Lambda_\mu$ is 1-Carleson; hence, we obtain the following result.

Theorem 4.1. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ. Then, for any $0 \lt p\leq \infty$,

\begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\ {is\,\, {H^{{p}}}-interpolating}\bigr) = \left\{\begin{array}{ll} 1\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^2(z)\, d\nu(z) \lt \infty$} \\ 0\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^2(z)\, d\nu(z)=\infty$}. \end{array}\right. \end{equation*}

To complete the picture, we discuss zero sequences Λ for Hp, $0 \lt p\leq \infty$. These are deterministically characterized by the Blaschke condition $\sum_{\lambda\in\Lambda}(1-|\lambda|) \lt \infty$. Noticing that $\{\sum_{\lambda\in\Lambda_\mu}(1-|\lambda|) \lt \infty\}$ is a tail event and using Kolmogorov’s 0-1 law we get:

Proposition 4.2. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ. Then, for any $0 \lt p\leq \infty$,

\begin{equation*} \mathbb{P}\bigl(\Lambda_\mu {\,is\, a\, zero\, set\, for\, {H^{{p}}}}\bigr) = \left\{\begin{array}{ll} 1\quad &\textrm{if $\ \int_{\mathbb{D}} (1-|z|)\, d\mu(z) \lt \infty$} \\ 0\quad &\textrm{if $\ \int_{\mathbb{D}} (1-|z|)\, d\mu(z)=\infty$}. \end{array}\right. \end{equation*}

Observe that

\begin{equation*} \int_{\mathbb{D}} (1-|z|)\,\mathrm{d}\mu(z)=\sum\limits_{n,k} \int_{T_{n,k}} (1-|z|)\,\mathrm{d}\mu(z)\simeq\sum\limits_{n,k}2^{-n}\mu_{n,k} \end{equation*}

and that the condition is just

\begin{equation*} \mathbb{E}\bigl[\sum_{\lambda\in\Lambda_\mu}(1-|\lambda|)\bigr]=\mathbb{E}\bigl[\sum_{n,k}\sum_{\lambda\in T_{n,k}}(1-|\lambda|)\bigr] \simeq \sum\limits_{n,k}2^{-n}\mathbb{E}\bigl[X_{n,k}\bigr] =\sum\limits_{n,k}2^{-n}\mu_{n,k} \lt \infty. \end{equation*}

Observe also that $\sum_{k=0}^{2^n-1}\mu_{n,k}=\mu(A_n)$ for all $n\in\mathbb{N}$; hence,

\begin{equation*} \sum\limits_{n,k}2^{-n}\mu_{n,k}=\sum_n 2^{-n}\mu(A_n). \end{equation*}

Proof of Proposition 4.2

Write $X_n=N_{A_n}=\sum_{k=0}^{2^n-1} X_{n,k}$ and $\mu_n=\mathbb{E}[X_n]=\mu(A_n)$.

Assume first that $\sum_{n} 2^{-n} \mu_n \lt +\infty$. Set $Y=\sum_n 2^{-n} X_n$ and observe that, by the independence of the different Xn,

\begin{align*} \mathbb{E}[Y]&=\sum_n 2^{-n} \mu_n \lt +\infty\qquad && {\rm Var}(Y)=\sum_n 2^{-2n}\mu_n \lt +\infty. \end{align*}

Then, by Markov’s inequality,

\begin{equation*} \mathbb{P}(Y\ge 2\mathbb{E}(Y))\le \frac{1}{2}. \end{equation*}

In particular, $\mathbb{P}(Y=+\infty)\le 1/2$. Since $\{Y=\infty\}$ is a tail event, Kolmogorov’s 0-1 law implies that $\mathbb{P}(Y=+\infty)=0$, and in particular, the Blaschke sum is finite almost surely.

Assume now that $\sum_{n} 2^{-n} \mu_n=+\infty$. Split the sum in two parts:

\begin{equation*} \sum_n 2^{-n}\mu_n=\sum_{n:\mu_n\le 2^n/n^2}2^{-n}\mu_n+ \sum_{n:\mu_n \gt 2^n/n^2}2^{-n}\mu_n. \end{equation*}

It is enough to consider the second sum, since the first one obviously converges. Since ${\rm Var}[X_n]=\mu_n$, Chebyshev’s inequality yields,

\begin{equation*} \mathbb{P}(X_n\le \frac{1}{2}\mu_n) =\mathbb{P}(X_n\le \mu_n-\frac{\mu_n}{2})\leq \mathbb{P}(|X_n- \mu_n|\geq \frac{\mu_n}{2}) \le \frac{4}{\mu_n}. \end{equation*}


Hence,

\begin{equation*} \sum_{n:\mu_n \gt 2^n/n^2} \mathbb{P}(X_n\le \frac{1}{2}\mu_n) \le \sum_{n:\mu_n \gt 2^n/n^2}\frac{4}{\mu_n}\le \sum_{n:\mu_n \gt 2^n/n^2}\frac{4n^2}{2^n} \lt +\infty. \end{equation*}

Now, by the Borel–Cantelli lemma, almost surely $X_n \gt \frac{1}{2}\mu_n$ for all but finitely many of the n with $\mu_n \gt 2^n/n^2$; hence,

\begin{equation*} \sum_{n:\mu_n \gt 2^n/n^2} 2^{-n}X_n \gtrsim\frac{1}{2}\sum_{n:\mu_n \gt 2^n/n^2}2^{-n}\mu_n, \end{equation*}

which diverges, by hypothesis. Hence the Blaschke sum is almost surely infinite.
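The Chebyshev estimate used in the proof can be verified against exact Poisson probabilities; the values of $\mu_n$ below are illustrative:

```python
import math

def poisson_cdf(mu, x):
    # P(X <= x) for X ~ Poisson(mu), from the probability mass function
    return sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(int(x) + 1))

# Chebyshev: P(X_n <= mu_n/2) <= Var(X_n) / (mu_n/2)^2 = 4/mu_n
bounds_ok = [poisson_cdf(mu, mu / 2) <= 4.0 / mu for mu in (8, 16, 32, 64)]
```

The exact left tails are in fact much smaller than $4/\mu_n$, but the cruder bound is all the proof needs.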

4.1.1. Remark. Interpolation in Bergman spaces

Interpolating sequences Λ for the (weighted) Bergman spaces

\begin{equation*} B_\alpha^p=\Bigl\{f\in H(\mathbb{D}) : \|f\|_{\alpha,p}^p=\int_{\mathbb{D}} |f(z)|^p (1-|z|^2)^{\alpha p-1}\,\mathrm{d}m(z) \lt \infty\Bigr\}, \end{equation*}

with $\alpha \gt 0$ and $0 \lt p\leq \infty$, are characterized by separation together with the upper density condition

\begin{equation*} D_+(\Lambda):=\limsup_{r\to 1^-} \sup_{z\in\mathbb{D}} \frac{\sum\limits_{1/2 \lt \rho(z,\lambda)\leq r}\,\log\frac 1{\rho(z,\lambda)}}{\,\log(\frac 1{1-r})} \lt \alpha \end{equation*}

(see [Reference Seip23] and [Reference Hedenmalm, Korenblum and Zhu17, Chapter 5] for both the definitions and the results).

Since every 1-Carleson sequence has density $D_+(\Lambda)=0$, the same conditions of Theorem 4.1 also characterize a.s. Bergman interpolating sequences, regardless of the indices α and p. Again, because of the big fluctuations of the Poisson process, the conditions required to have separation a.s. are so strong that they can only produce sequences of zero upper density.

Another indication of the big fluctuations of the Poisson process is the following. For the invariant measure $\mathrm{d}\nu(z)=\frac{\mathrm{d}m(z)}{(1-|z|^2)^2}$, which obviously satisfies $\nu_{n,k}\simeq 1$ for all n, k, it is not difficult to see that almost surely,

\begin{equation*} D_+(\Lambda_{\nu})=+\infty\qquad\textrm{and}\quad D_-(\Lambda_{\nu}):=\liminf_{r\to 1^-} \inf_{z\in\mathbb{D}} \frac{\sum\limits_{1/2 \lt \rho(z,\lambda)\leq r}\,\log\frac 1{\rho(z,\lambda)}}{\log(\frac 1{1-r})}=0. \end{equation*}

Therefore, there are far too many points for $\Lambda_\nu$ to be interpolating for any $B_\alpha^p$, but too few for it to be sampling, since sampling sets must have strictly positive lower density $D_-(\Lambda)$ (see [Reference Hedenmalm, Korenblum and Zhu17, Chapter 5]).

4.2. Interpolation in the Bloch space

We consider now interpolation in the Bloch space $\mathcal B$, consisting of functions f holomorphic in $\mathbb{D}$ such that

\begin{equation*} \|f\|_{\mathcal B}:=|f(0)|+\sup_{z\in\mathbb{D}}|f'(z)|(1-|z|^2) \lt +\infty. \end{equation*}

Since Bloch functions satisfy the Lipschitz condition $|f(z)-f(w)|\leq \|f\|_{\mathcal B}\, \delta(z,w)$, where $\delta(z,w)=\frac{1}{2}\,\log\frac{1+\rho(z,w)}{1-\rho(z,w)}$ denotes the hyperbolic distance, Bøe and Nicolau defined interpolating sequences for $\mathcal B$ as those $\Lambda=\{\lambda_n\}_{n\in\mathbb{N}}$ such that for every sequence of values $\{v_n\}_{n\in\mathbb{N}}$ with $\sup\limits_{n\neq m}\frac{|v_n-v_m|}{\delta(\lambda_n,\lambda_m)} \lt \infty$ there exists $f\in\mathcal B$ with $f(\lambda_n)=v_n$, $n\in\mathbb{N}$ [Reference Bøe and Nicolau5].

Theorem ([Reference Bøe and Nicolau5, p. 172], [Reference Seip24, Theorem 7]). A sequence Λ of distinct points in $\mathbb{D}$ is an interpolating sequence for $\mathcal B$ if and only if:

  • (a) Λ can be expressed as the union of at most two separated sequences,

  • (b) for some $0 \lt \gamma \lt 1$ and C > 0,

    \begin{equation*} \#\bigl\{\lambda\in\Lambda : \rho(z,\lambda) \lt r\bigr\}\le \frac{C}{(1-r)^{\gamma}} \end{equation*}

    independently of $z\in\mathbb{D}$.

As explained in [Reference Bøe and Nicolau5], condition (b) can be replaced by:

  • (b)’ for some $0 \lt \gamma \lt 1$ and C > 0, and for all Carleson windows Q(I),

    \begin{equation*} \#\bigl\{\lambda\in Q(I) : 2^{-(l+1)}|I| \lt 1-|\lambda| \lt 2^{-l}|I|\bigr\}\leq C 2^{\gamma l}\ , \qquad l\geq 0. \end{equation*}

In [Reference Seip24, Corollary 2], it is mentioned that it can also be replaced by:

  • (b)” there exists $0 \lt \gamma \lt 1$ such that Λ is γ-Carleson.

In view of conditions (a) and (b)”, the following characterization of Poisson processes which are a.s. Bloch interpolating sequences follows from Theorems 1.2 and  1.5(b) (with $\gamma\in (2/3,1)$).

Theorem 4.3. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ. Then,

\begin{equation*} \mathbb{P}\bigl(\Lambda_\mu\,\, {\mathrm{is\,\, \mathcal B-interpolating}}\bigr) = \left\{\begin{array}{ll} 1\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^3(z)\, d\nu(z) \lt \infty$} \\ 0\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^3(z)\, d\nu(z)=\infty$}. \end{array}\right. \end{equation*}

Note. In case $\int_{\mathbb{D}} F_\mu^3(z)\,\mathrm{d}\nu(z) \lt +\infty$, it is also possible to prove (b)’ directly, with the same methods employed in the proof of Theorem 1.5. It is enough to prove the estimate for dyadic arcs $I_{n,k}$, and for those

\begin{equation*} \#\bigl\{\lambda\in Q(I_{n,k}) : 2^{-(l+1)}|I_{n,k}| \lt 1-|\lambda| \lt 2^{-l}|I_{n,k}|\bigr\}\simeq \sum_{j : T_{n+l,j}\subset Q_{n,k}} X_{n+l, j}. \end{equation*}

In the above, the left-hand side corresponds essentially to the number of points in the layer $Q(I_{n,k})\cap A_{n+l}$. Thus, with $m=n+l$, (b)’ is equivalent to

\begin{equation*} \sup_{n,k}\sup_{m\geq n} 2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} X_{m,j} \lt +\infty. \end{equation*}


Setting

\begin{equation*} Y_{n,k,m}=2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} X_{m,j}\ ,\qquad \mathbb{E}[Y_{n,k,m}]=2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j} \end{equation*}

and proceeding as in the first part of the proof of Theorem 1.5(a) we get (taking A = 3):

\begin{align*} \sum_{n,k}\sum_{m\geq n} \mathbb{P}\bigl(Y_{n,k,m}\geq 3\bigr)&\lesssim\sum_{n,k}\sum_{m\geq n} \Bigl[2^{-\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j}\Bigr]^3\\ &\leq \sum_{n,k}\sum_{m\geq n} 2^{-3\gamma(m-n)} \sum_{j : T_{m,j}\subset Q_{n,k}} \mu_{m,j}^3\ 2^{2(m-n)}\\ &=\sum_{m,j} \mu_{m,j}^3 \sum_{n\leq m} \sum_{k : Q_{n,k} \supseteq T_{m,j}} 2^{-(3\gamma-2)(m-n)}. \end{align*}

For any $\gamma \gt 2/3$, this sum is bounded by a constant multiple of $\sum_{m,j} \mu_{m,j}^3$, so we can conclude with the Borel–Cantelli lemma.
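Two elementary ingredients of this computation, the Hölder step $\bigl(\sum_j \mu_{m,j}\bigr)^3\le 2^{2(m-n)}\sum_j\mu_{m,j}^3$ for $2^{m-n}$ terms and the convergence of the geometric factor for $\gamma \gt 2/3$, can be checked directly; the random sample data and the value of γ below are illustrative:

```python
import random

def holder_cube(xs):
    # (sum x_j)^3 <= K^2 * sum x_j^3 for K nonnegative numbers (Hölder / power mean)
    K = len(xs)
    return sum(xs) ** 3 <= K ** 2 * sum(x ** 3 for x in xs) + 1e-9

random.seed(0)
samples = [[random.random() for _ in range(2 ** l)] for l in range(1, 8)]

gamma = 0.7                          # any gamma > 2/3 makes 3*gamma - 2 > 0
geom = sum(2 ** (-(3 * gamma - 2) * l) for l in range(200))
```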

4.3. Interpolation in Dirichlet spaces

Our last set of results concerns interpolation in the Dirichlet spaces,

\begin{equation*} \mathcal D_\alpha=\bigl\{f\in H(\mathbb{D}) : \|f\|_{\mathcal D_\alpha}^2=|f(0)|^2+\int_{\mathbb D} |f'(z)|^2 (1-|z|^2)^{\alpha}\,\mathrm{d}m(z) \lt \infty\bigr\}, \end{equation*}

with $\alpha\in(0,1)$. The limiting case α = 1 can be identified with the Hardy space $H^2$.

In these spaces, interpolating sequences are characterized by the separation and a Carleson type condition. This was initially considered by W.S. Cohn, see [Reference Cohn11]; we refer also to the general result [Reference Aleman, Hartz, McCarthy and Richter1]. While separation is a simple condition, which in our random setting is completely characterized by Theorem 1.2, the characterization of Carleson measures in these spaces is much more delicate. This was achieved by Stegenga using the so-called α-capacity [Reference Stegenga25]. In our setting, it is, however, possible to use an easier sufficient one-box condition that can be found in K. Seip’s book, see [Reference Seip24, Theorem 4, p. 38], which we recall here for the reader’s convenience.

Theorem 4.4 (Seip)

A separated sequence Λ in $\mathbb{D}$ is interpolating for $\mathcal D_{\alpha}$, $0 \lt \alpha \lt 1$, if there exists $\alpha'$ with $0 \lt \alpha' \lt \alpha$ such that Λ is $\alpha'$-Carleson.

The reader should be alerted that in Seip’s book the space $\mathcal D_{\alpha}$ is defined in a slightly different way and that the above statement is adapted to our definition.

For these spaces, Theorems 1.2 and 1.5 lead to less precise conclusions. Indeed, in view of Theorem 1.5(c),(d), we cannot hope for complete characterizations if we do not impose additional conditions on the measure µ.

Theorem 4.5. Let $\Lambda_\mu$ be the Poisson process associated with a positive, σ-finite, locally finite measure µ.

  • (a) If $1/2 \lt \alpha \lt 1$, then

    \begin{equation*} \mathbb{P}\bigl(\Lambda_\mu \text{ is interpolating for $\mathcal D_\alpha$}\bigr)= \left\{\begin{array}{ll} 1\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^2(z)\, d\nu(z) \lt +\infty$}\\ 0\quad &\textrm{if $\ \int_{\mathbb{D}} F_\mu^2(z)\, d\nu(z)=+\infty$}. \end{array}\right. \end{equation*}
  • (b) If $0\le \alpha \lt 1/2$ and there exists $1 \lt \gamma \lt \frac{1}{1-\alpha}$ such that $\int_{\mathbb{D}} F_\mu^\gamma(z)\, d\nu(z) \lt +\infty$, then

    \begin{equation*} \mathbb{P}\bigl(\Lambda_\mu \text{ is interpolating for $\mathcal D_\alpha$}\bigr)=1. \end{equation*}

Clearly, the condition $\int_{\mathbb{D}} F_\mu^2(z)\,\mathrm{d}\nu(z) \lt +\infty$ is also necessary in the case (b) (if the integral diverges, then $\Lambda_{\mu}$ is almost surely not separated).

Proof. (a) If $\sum_{n,k}\mu_{n,k}^2=+\infty$, then $\Lambda_{\mu}$ is almost surely not separated by Theorem 1.2; hence, it is almost surely not interpolating.

If $\sum_{n,k}\mu_{n,k}^2 \lt +\infty$, Theorem 1.2 shows again that the sequence $\Lambda_{\mu}$ is almost surely separated. By Seip’s theorem, it remains to show that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson for some $\alpha' \lt \alpha$. Pick $1/2 \lt \alpha' \lt \alpha \lt 1$, so that $1/(1-\alpha') \gt 2$. Choosing $\gamma\in (2,1/(1-\alpha'))$, we get

\begin{equation*} \sum_{n,k}\mu_{n,k}^{\gamma}\lesssim \sum_{n,k}\mu_{n,k}^2 \lt +\infty, \end{equation*}

and by Theorem 1.5(b), we conclude that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson.

(b) If $\alpha \lt 1/2$, then $1/(1-\alpha) \lt 2$ and the value γ given by the hypothesis satisfies $1 \lt \gamma \lt 2$. Therefore,

\begin{equation*} \sum_{n,k}\mu_{n,k}^2\lesssim \sum_{n,k}\mu_{n,k}^{\gamma} \lt +\infty, \end{equation*}

which allows us to deduce from Theorem 1.2 that $\Lambda_{\mu}$ is almost surely separated.

Since the inequality $\gamma \lt 1/(1-\alpha)$ is strict, we also have $\gamma \lt 1/(1-\alpha')$ for some $\alpha' \lt \alpha$ sufficiently close to α. Again, Theorem 1.5(b) shows that $\Lambda_{\mu}$ is almost surely $\alpha'$-Carleson, and Seip’s theorem implies that $\Lambda_{\mu}$ is almost surely interpolating.

4.4. Additional remarks and comments

The above results show several applications of our Theorems 1.2 and 1.5, but they also give rise to many challenging questions. Is it possible to get a necessary counterpart of Theorem 1.5(b) under reasonable conditions on µ (more general than the class considered in § 5 below)? Is it possible to get precise statements when $\alpha=1/2$? Also, the case of the classical Dirichlet space seems to be largely unexplored for Poisson point processes, while the situation regarding interpolation, separation and zero-sets for the radial probabilistic model is completely known for all $\alpha\in [0,1]$ (see [Reference Bogdan6, Reference Chalmoukis, Hartmann, Kellay and Wick9]).

5. Examples

In this final section, we illustrate the above results with three simple families of measures on $\mathbb{D}$. In the second part, we briefly discuss alternative, non-discrete, formulations of the conditions given in the previous statements.

5.1. Radial measures


Define

\begin{equation*} \mathrm{d}\mu(a,b)(z)= \frac{\mathrm{d}m(z)}{(1-|z|^2)^{a} \,\log^b\bigl(\frac{\mathrm{e}}{1-|z|^2}\bigr)}= \frac{\mathrm{d}\nu(z)}{(1-|z|^2)^{a-2} \,\log^b\bigl(\frac{\mathrm{e}}{1-|z|^2}\bigr)}, \end{equation*}

where either a > 1, $b\in\mathbb{R}$, or a = 1 and $b\leq 1$ (so that $\mu(a,b)(\mathbb{D})=+\infty$).

Observe that

\begin{equation*} \mu(a,b)_{n,k}\simeq \frac{2^{-n(2-a)}}{n^b}\qquad n\geq 1,\ k=0, \dots, 2^n-1, \end{equation*}

and therefore, for γ > 0,

(11)\begin{equation} \sum_{n,k} \mu(a,b)_{n,k}^\gamma\simeq \sum_n 2^n \frac{2^{-n(2-a)\gamma}}{n^{b\gamma}}= \sum_n\frac{2^{-n[(2-a)\gamma-1]}}{n^{b\gamma}}. \end{equation}
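The convergence criterion encoded in (11), geometric decay when $a \lt 2-1/\gamma$ and a harmonic-type series at the critical exponent, can be explored numerically. The parameter choices below (γ = 2, i.e. the separation case M = 1, with threshold a = 3/2) are illustrative:

```python
def partial_sum(a, b, gamma, N):
    # partial sum of (11): sum_{n=1}^N 2^{-n[(2-a)gamma - 1]} / n^{b gamma}
    return sum(2 ** (-n * ((2 - a) * gamma - 1)) / n ** (b * gamma)
               for n in range(1, N + 1))

gamma = 2                             # separation case M = 1: threshold a = 3/2
subcritical = partial_sum(1.4, 0.0, gamma, 2000)           # a < 3/2: converges
critical = [partial_sum(1.5, 0.5, gamma, N) for N in (10 ** 3, 10 ** 4)]
```

The subcritical partial sums stabilize, while at a = 3/2, b = 1/2 the series reduces to the harmonic series and its partial sums keep growing like log N.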

Proposition 5.1. Consider the Poisson process $\Lambda_{a,b}$ associated with the measure $\mu(a,b)$, with either a > 1 or a = 1 and $b\leq 1$.

  • (a) $\Lambda_{a,b}$ can a.s. be expressed as a union of M separated sequences if and only if either $a \lt 2-\frac 1{M+1}$ and $b\in\mathbb{R}$ or $a=2-\frac 1{M+1}$ and $b \gt \frac 1{M+1}$.

  • (b) In particular, $\Lambda_{a,b}$ is a.s. separated if and only if either $a \lt 3/2$ and $b\in\mathbb{R}$ or $a=3/2$ and $b \gt 1/2$.

  • (c) $\Lambda_{a,b}$ is a.s. a 1-Carleson sequence if and only if a < 2, $b\in\mathbb{R}$.

  • (d) Let $\alpha\in (0,1)$. Then $\Lambda_{a,b}$ is a.s. an α-Carleson sequence if and only if $a \lt 1+\alpha$ or $a=1+\alpha$ and b > 1.

Proof. (a) is immediate from Theorem 1.2 and (11) with $\gamma=M+1$ and the usual equivalence of Proposition 2.1, and (b) is just the case M = 1.

(c) If $a\geq 2$ the series in (11) diverges for all γ > 1, thus by Theorem 1.5(a) $\Lambda_{a,b}$ is a.s. not 1-Carleson.

On the other hand, if a < 2, there exists γ such that $(2-a)\gamma-1 \gt 0$ (i.e., such that $\gamma \gt \frac 1{2-a}$). For that γ, the series in (11) converges, and we can conclude again by Theorem 1.5(a).

(d) Suppose first that $a \lt 1+\alpha$. As in the previous case, since $2-a \gt 1-\alpha$, there exists $\gamma\in(\frac 1{2-a},\frac 1{1-\alpha})$. For this γ, the series in (11) converges and we can apply Theorem 1.5(b).

If $a \gt 1+\alpha$ and $b\in\mathbb{R}$, then $\Lambda_{\mu(a,b)}$ contains on average more points than $\Lambda_{\mu(1+\alpha,1)}$, which was shown in Theorem 1.5(c) to be almost surely not α-Carleson.

It remains to treat the case $a=1+\alpha$. Again, when b = 1 (and thus also when b < 1, since then there are more points on average), the proof of Theorem 1.5(c) shows that the corresponding sequence is almost surely not α-Carleson.

Finally, suppose that $a=1+\alpha$ and b > 1. Recall from (8) the notation

\begin{equation*} \quad Y_{n,k}=2^{n\alpha}\sum_{m\geq n} 2^{-m\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} X_{m,j}. \end{equation*}

In the proof of Theorem 1.5(b), we have shown that

\begin{equation*} \mathbb{P}(Y_{n,k}\ge A)\lesssim B_{n,k}^A, \end{equation*}


where

\begin{equation*} B_{n,k}=\sum_{m\geq n} 2^{-(m-n)\alpha} \sum_{j: T_{m,j}\subset Q_{n,k}} \mu_{m,j}. \end{equation*}

From the explicit form of $\mu_{m,j}$, we get

\begin{equation*} B_{n,k}\simeq \sum_{m \ge n}2^{-(m-n)\alpha}\times 2^{m-n}\times \frac{2^{-m(2-a)}}{m^b} =2^{n(\alpha-1)}\sum_{m\ge n}\frac{2^{-m(\alpha+1-a)}}{m^b}, \end{equation*}

which converges exactly when $a \lt 1+\alpha$ or when $a=1+\alpha$ and b > 1, the case we are interested in here. In this situation, since $\sum_{m\ge n}m^{-b}\simeq n^{1-b}$, we get

\begin{equation*} B_{n,k}\simeq \frac{2^{-n(1-\alpha)}}{n^{b-1}}. \end{equation*}

Clearly, when $A \gt 1/(2-a)=1/(1-\alpha)$, the sum $\sum_{n,k}B_{n,k}^A$ converges (the geometric factor $2^{n(1-(1-\alpha)A)}$ decays), and the Borel–Cantelli lemma shows that $Y_{n,k}\ge A$ can happen for at most finitely many Carleson windows $Q_{n,k}$. Hence, $\Lambda_{\mu(a,b)}$ is a.s. α-Carleson.
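In the critical case $a=1+\alpha$, b > 1, one can confirm numerically that $B_{n,k}\simeq 2^{-n(1-\alpha)}\sum_{m\ge n}m^{-b}$, with $\sum_{m\ge n}m^{-b}\simeq n^{1-b}/(b-1)$; the choices α = 1/2, b = 2 and the truncation level M below are illustrative:

```python
alpha, b = 0.5, 2.0                  # illustrative: a = 1 + alpha, b > 1

def B(n, M=40000):
    # truncation of 2^{n(alpha-1)} sum_{m >= n} 1/m^b (the case a = 1 + alpha)
    return 2 ** (n * (alpha - 1)) * sum(1.0 / m ** b for m in range(n, M + 1))

# B(n) * 2^{n(1-alpha)} * n^{b-1} should approach 1/(b-1) = 1 for these parameters
ratios = [B(n) * 2 ** (n * (1 - alpha)) * n ** (b - 1) for n in (50, 100, 200)]
```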

5.2. Measures with a singularity on $\mathbb{T}$

Define now

\begin{equation*} \mathrm{d}\sigma(a,b)(z)= \frac{\mathrm{d}m(z)}{|1-z|^{a}\,\log^b\bigl(\frac{\mathrm{e}}{|1-z|}\bigr)}, \end{equation*}

where either a > 2, $b\in\mathbb{R}$, or a = 2 and $b\leq 1$ (so that $\sigma(a,b)(\mathbb{D})=+\infty$). Here,

\begin{align*} \sigma (a,b)_{n,k}=\sigma (a,b)(T_{n,k})\simeq \frac{2^{-2n}}{[(k+1)2^{-n}]^a\,\log^b\bigl(\frac{\mathrm{e}}{(k+1)2^{-n}}\bigr)},\quad n\in\mathbb{N},\ k=0,\ldots, 2^{n-1}. \end{align*}

Hence, for γ > 1,

(12)\begin{equation} \sum_{n,k} \sigma (a,b)_{n,k}^\gamma\simeq \sum_{n} {2^{-n\gamma(2-a)}} \sum_{k=1}^{2^n}\frac 1{\Big(k^a\,\log^b\bigl(\frac {\mathrm{e}}{k2^{-n}}\bigr)\Big)^{\gamma}}. \end{equation}

Let us examine the growth of the sum in k. For that, set

\begin{equation*} S_n(a,b,\gamma)=\sum_{k=1}^{2^n}\frac 1{k^{a\gamma}\,\log^{b\gamma}\bigl(\frac {\mathrm{e}}{k2^{-n}}\bigr)} \simeq \int_1^{2^n}\frac{\mathrm{d}x}{x^{a\gamma}\,\log^{b\gamma}\bigl(\frac {\mathrm{e}}{x2^{-n}}\bigr)}. \end{equation*}

The change of variable $t=\log\bigl(\frac {\mathrm{e}}{x2^{-n}}\bigr)$ leads to

\begin{equation*} S_n(a,b,\gamma)\simeq \int_{\log(2^n\mathrm{e})}^1 \left(\frac{\mathrm{e}^t}{\mathrm{e}2^n}\right)^{a\gamma-1}\frac{-\mathrm{d}t}{t^{b\gamma}} =\frac{2^{-n(a\gamma-1)}}{\mathrm{e}^{a\gamma -1}}\int_{1}^{\log(2^ne)}\mathrm{e}^{t(a\gamma-1)}\frac{\mathrm{d}t}{t^{b\gamma}}. \end{equation*}

Our standing assumption being a > 2, or a = 2 and $b\le 1$, we only need to consider these two cases. In both cases, $\mathrm{e}^{t(a\gamma-1)}/t^{b\gamma}\to +\infty$ as $t\to+\infty$, and the last integral behaves essentially like the value of its integrand at the upper endpoint of the integration interval:

\begin{equation*} \int_{1}^{\log(2^n\mathrm{e})}\mathrm{e}^{t(a\gamma-1)}\frac{\mathrm{d}t}{t^{b\gamma}} \simeq \frac{2^{n(a\gamma-1)}}{n^{b\gamma}}. \end{equation*}


Hence,

\begin{equation*} S_n(a,b,\gamma)\simeq \frac{1}{n^{b\gamma}}, \end{equation*}


and therefore

(13)\begin{equation} \sum_{n,k} \sigma (a,b)_{n,k}^\gamma\simeq \sum_n 2^{-n\gamma(2-a)}\times \frac{1}{n^{\gamma b}}=\sum_n \frac{2^{-n\gamma(2-a)}}{n^{\gamma b}}. \end{equation}
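The asymptotics $S_n(a,b,\gamma)\simeq n^{-b\gamma}$ can be confirmed numerically at the critical exponent a = 2. In the sketch below (the choices b = 1, γ = 2 are illustrative), the sum is truncated at $10^5$ terms since the summand decays like $k^{-4}$:

```python
import math

def S(n, a=2.0, b=1.0, gamma=2.0):
    # truncation of S_n(a,b,gamma) = sum_{k=1}^{2^n} k^{-a gamma} log^{-b gamma}(e 2^n / k)
    top = min(2 ** n, 10 ** 5)       # the tail beyond 10^5 terms is negligible here
    return sum(k ** (-a * gamma) * math.log(math.e * 2 ** n / k) ** (-b * gamma)
               for k in range(1, top + 1))

# S(n) * n^{b gamma} should stabilize, in line with S_n ~ n^{-b gamma}
vals = [S(n) * n ** 2 for n in (12, 16, 20)]
```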

We are now in a position to prove the following result.

Proposition 5.2. Consider the Poisson process $\tilde \Lambda_{a,b}$ associated with the measure $\sigma(a,b)$ with either a > 2 or a = 2 and $b\leq 1$.

  • (a) For a > 2, the process $\tilde \Lambda_{a,b}$ is a.s. neither a finite union of separated sequences nor an α-Carleson sequence, for any $\alpha\in (0,1]$.

  • (b) For a = 2, the process $\tilde \Lambda_{2,b}$ is

    • (i) the union of M separated sequences if and only if $b \gt \frac 1{M+1}$,

    • (ii) α-Carleson for $\alpha\in (0,1)$ if $b \gt 1-\alpha$.

Proof. (a) is immediate from Theorems 1.2 and  1.5, since (13) diverges for all γ > 0.

(b) In this case, the series (13) is just $\sum_n 1/n^{b\gamma}$.

The case (i) follows from Theorem 1.2 with $\gamma=M+1$.

For (ii), by the hypothesis $1/b \lt 1/(1-\alpha)$, there exists $1/b \lt \gamma \lt 1/(1-\alpha)$, for which the series (13) converges. We can conclude by Theorem 1.5.

5.3. Measures in a cone

Given a point $\zeta\in\mathbb{T}$, consider the Stolz region

\begin{equation*} \Gamma(\zeta)=\bigl\{z\in \mathbb{D} :\frac{|\zeta-z|}{1-|z|} \lt 2\bigr\}. \end{equation*}

We discuss the previous measures restricted to $\Gamma(\zeta)$. With no loss of generality, we can assume that ζ = 1. Let thus

\begin{equation*} \mathrm{d}\tau(a,b)(z)= \chi_{\Gamma(1)}(z)\,\mathrm{d}\mu(a,b)(z)=\chi_{\Gamma(1)}(z)\frac{\mathrm{d}m(z)}{(1-|z|^2)^{a}\,\log^b\bigl(\frac {\mathrm{e}}{1-|z|^2}\bigr)}, \end{equation*}

where now either a > 2 and $b\in\mathbb{R}$ or a = 2 and $b\leq 1$ (so that $\tau(a,b)(\mathbb{D})=+\infty$). Since, in $\Gamma(1)$, the measures $\mathrm{d}\mu(a,b)$ and $\mathrm{d}\sigma(a,b)$ behave similarly, we could replace $\mathrm{d}\mu(a,b)$ by $\mathrm{d}\sigma(a,b)$ in the definition of $\mathrm{d}\tau(a,b)$ and obtain the same results.

Observe that $\tau(a,b)_{n,k}$ is non-zero only for a finite number N of indices k at each level n, and that for those k

\begin{equation*} \tau(a,b)_{n,k}\simeq \frac{2^{-n(2-a)}}{n^b}\qquad n\geq 1,\ k=0, \dots, N. \end{equation*}
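This estimate can be checked directly, assuming (as elsewhere in the section) that $\tau(a,b)_{n,k}$ denotes the mass of the top half $T_{n,k}$ and using the standard dyadic estimates: on $T_{n,k}$ one has $1-|z|^2\simeq 2^{-n}$, hence $\log\bigl(\mathrm{e}/(1-|z|^2)\bigr)\simeq n$, while $m(T_{n,k})\simeq 4^{-n}$. Sketching the computation:

```latex
\begin{equation*}
\tau(a,b)_{n,k}
  \simeq \frac{m(T_{n,k})}{(2^{-n})^{a}\, n^{b}}
  \simeq \frac{4^{-n}}{2^{-na}\, n^{b}}
  = \frac{2^{-n(2-a)}}{n^{b}}.
\end{equation*}
```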


Hence,

(14)\begin{equation} \sum_{n,k} \tau(a,b)_{n,k}^\gamma\simeq \sum_n \frac{2^{-n(2-a)\gamma}}{n^{b\gamma}}, \end{equation}

which is exactly the same estimate as in (13) and thus immediately leads to the same result as in Proposition 5.2. This might look surprising, since $\sigma(a,b)$ (and a fortiori $\mu(a,b)$) puts infinite mass outside $\Gamma(\zeta)$ (actually, outside Stolz angles at ζ of arbitrary opening).

Proposition 5.3. Consider the Poisson process $\hat \Lambda_{a,b}$ associated with the measure $\tau(a,b)$, with either a > 2 or a = 2 and $b\leq 1$.

  • (a) For a > 2, the process $\hat \Lambda_{a,b}$ is a.s. neither a finite union of separated sequences nor α-Carleson for any $\alpha\in (0,1]$.

  • (b) For a = 2, the process $\hat \Lambda_{2,b}$ is a.s.

    • (i) the union of M separated sequences if and only if $b \gt \frac 1{M+1}$,

    • (ii) α-Carleson for $\alpha\in (0,1)$ if $b \gt 1-\alpha$.
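To make the cell-count picture behind these statements concrete: a Poisson process with intensity $\tau(a,b)$ puts an independent Poisson number of points in each top half $T_{n,k}$, with mean $\tau(a,b)_{n,k}$. The following sketch (ours, not from the paper) simulates this in the simplified model for a = 2 where each level n of the cone carries a single cell of mean $1/n^b$, absorbing the finitely many cells per level and the implicit constants:

```python
import math
import random

def poisson_sample(lam: float, rng: random.Random) -> int:
    """Sample Poisson(lam) by Knuth's product-of-uniforms method
    (adequate here since the means 1/n^b are small)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

# Case a = 2, b = 3/4: since 1/2 < b <= 1, the measure is infinite but the
# process is a.s. a separated sequence by (b)(i) with M = 1.
rng = random.Random(0)
b, levels, trials = 0.75, 50, 2000
avg = sum(
    sum(poisson_sample(1.0 / n ** b, rng) for n in range(1, levels + 1))
    for _ in range(trials)
) / trials
# The average total count over the first 50 levels is close to
# sum_{n <= 50} n^{-3/4}, roughly 7.2.
```

The simulation only illustrates the independent-cell-count model; separation itself is the a.s. statement proved via Theorem 1.2, not something a finite sample certifies.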


The first author was partially supported by the project REPKA (ANR-18-CE40-0035). The second author was partially supported by the Generalitat de Catalunya (grant 2021 SGR 00087) and the Spanish Ministerio de Ciencia e Innovación (project PID2021-123405NB-I00).


Aleman, A., Hartz, M., McCarthy, J. and Richter, S., Interpolating sequences in spaces with the complete Pick property. Int. Math. Res. Not. (2019), no. 12, 3832–3854.
Aparicio Monforte, A., Successions aleatòries a $\mathbb B_n\subset\mathbb C^n$. Master's thesis, Universitat de Barcelona (2005).
Amar, É. and Bonami, A., Mesures de Carleson d'ordre α et solutions au bord de l'équation $\bar\partial_b$. Bull. Soc. Math. France 107 (1979), no. 1, 23–48.
Billingsley, P., Probability and measure. Wiley, New York, 1979.
Bøe, B. and Nicolau, A., Interpolation by functions in the Bloch space. J. Anal. Math. 94 (2004), 171–194.
Bogdan, K., On the zeros of functions with finite Dirichlet integral. Kodai Math. J. 19 (1996), no. 1, 7–16.
Bomash, G., A Blaschke-type product and random zero sets for Bergman spaces. Ark. Mat. 30 (1992), no. 1, 45–60.
Carleson, L., An interpolation problem for bounded analytic functions. Amer. J. Math. 80 (1958), 921–930.
Chalmoukis, N., Hartmann, A., Kellay, K. and Wick, B. D., Random interpolating sequences in Dirichlet spaces. Int. Math. Res. Not. 17 (2022), 13629–13658. doi:10.1093/imrn/rnab110
Cochran, W. G., Random Blaschke products. Trans. Amer. Math. Soc. 322 (1990), no. 2, 731–755.
Cohn, W. S., Interpolation and multipliers on Besov and Sobolev spaces. Complex Variables Theory Appl. 22 (1993), no. 1–2, 35–45. doi:10.1080/17476939308814644
Daley, D. J. and Vere-Jones, D., An introduction to the theory of point processes. Vol. I: Elementary theory and methods, 2nd ed. Probability and its Applications, Springer-Verlag, New York, 2003.
Dayan, A., Wick, B. D. and Wu, Sh., Random interpolating sequences in the polydisc and the unit ball. Comput. Methods Funct. Theory 23 (2023), no. 1, 165–198.
Duren, P. L., Theory of $H^p$ spaces. Pure and Applied Mathematics, 38, Academic Press, New York–London, 1970.
El Fallah, O., Kellay, K., Mashreghi, J. and Ransford, T., One-box conditions for Carleson measures for the Dirichlet space. Proc. Amer. Math. Soc. 143 (2015), no. 2, 679–684.
Garnett, J. B., Bounded analytic functions. Pure and Applied Mathematics, 96, Academic Press, New York–London, 1981.
Hedenmalm, H., Korenblum, B. and Zhu, K., Theory of Bergman spaces. Graduate Texts in Mathematics, 199, Springer-Verlag, New York, 2000.
Hough, J. B., Krishnapur, M., Peres, Y. and Virág, B., Zeros of Gaussian analytic functions and determinantal point processes. University Lecture Series, 51, American Mathematical Society, Providence, RI, 2009.
Last, G. and Penrose, M., Lectures on the Poisson process. Institute of Mathematical Statistics Textbooks, 7, Cambridge University Press, Cambridge, 2018.
McDonald, G. and Sundberg, C., Toeplitz operators on the disc. Indiana Univ. Math. J. 28 (1979), no. 4, 595–611.
Rudin, W., Function theory in the unit ball of $\mathbb{C}^n$. Grundlehren der Mathematischen Wissenschaften, 241, Springer-Verlag, New York–Berlin, 1980.
Rudowicz, R., Random sequences interpolating with probability one. Bull. London Math. Soc. 26 (1994), no. 2, 160–164.
Seip, K., Beurling type density theorems in the unit disk. Invent. Math. 113 (1993), no. 1, 21–39.
Seip, K., Interpolation and sampling in spaces of analytic functions. University Lecture Series, 33, American Mathematical Society, Providence, RI, 2004.
Stegenga, D., Multipliers of the Dirichlet space. Illinois J. Math. 24 (1980), no. 1, 113–139.
Wikipedia contributors, Poisson point process. Wikipedia, The Free Encyclopedia, 19 April 2022, 19:38 UTC [accessed 19 May 2022].
Figure 1. Carleson window $Q(I_{n,k})$ associated with the dyadic interval $I_{n,k}$ and its top half $T_{n,k}$.

Figure 2. Dyadic partitions: $\{T_{n,k}\}_{n,k}$ in blue, $\{\tilde T_{n,k}\}_{n,k}$ in red.