1 Introduction
In this paper, we study the structure of ‘branch points’ in the free boundary of minimizers of Alt–Caffarelli–Friedman-type functionals (see equation (1.1) below). In particular, we show the existence of minimizers to the two-phase functional whose zero set contains an open subset (of positive measure) which stays far away from the fixed boundary of the domain. Relatedly, the free boundary of this minimizer also contains branch or cusp points (c.f. equation (1.2)). We also show, in contrast with recent results for critical points to equation (1.1) in [Reference De Philippis, Spolaor and Velichkov19], that the set of branch points in the free boundary of almost-minimizers to equation (1.1) can have fractal-like structure.
Alt, Caffarelli and Friedman, in [Reference Alt, Caffarelli and Friedman2], gave the first rigorous mathematical treatment of the two-phase energy
where $\Omega \subset {\mathbb {R}}^n$ is a domain with locally Lipschitz boundary, and $\lambda _{\pm }> 0$ Footnote ^{1}.
This is a two-phase analogue of the one-phase free boundary problem (also called the Bernoulli problem) studied in [Reference Alt and Caffarelli1], first introduced to model the flow of two liquids in jets and cavities but later found to have applications to a variety of problems including eigenvalue optimization, c.f. [Reference De Philippis, Spolaor and Velichkov20, Corollary 1.3].
We say that u is a minimizer of J in $\Omega $ if $J_D(u) \leq J_D(v)$ for all open D with $\overline {D} \subset \Omega $ and all $v\in W^{1,2}(\Omega )$ with $u = v$ in $\Omega \backslash \overline {D}$ . Alternatively, given some subset $S\subset \partial \Omega $ and continuous data $\varphi \in C(S)$ we say that u minimizes $J_\Omega $ for the data $\varphi $ if $u = \varphi $ on S and for any $v\in W^{1,2}(\Omega )$ with $v = \varphi $ on S we have $J_\Omega (u) \leq J_\Omega (v)$ . If the data, $\varphi $ , is not important, we simply say that u is a minimizer of $J_\Omega $ . We note that if u minimizes $J_\Omega $ for some data $\varphi $ , then u is a local minimizer of J in $\Omega $ .
Given a minimizer, u, of particular interest are the free boundaries, $\Gamma ^\pm (u) = \partial \{\pm u> 0\}$ . When $\Gamma ^+\cap \Gamma ^- = \emptyset $ , each of $\Gamma ^{\pm }$ is the free boundary of a minimizer to an associated one-phase problem and thus has well-understood regularity (c.f. [Reference Alt and Caffarelli1]). On the other hand, when $\Gamma ^+ = \Gamma ^-$ , the free boundary regularity is also well understood, first when $n=2$ in [Reference Alt, Caffarelli and Friedman2] and later by Caffarelli ([Reference Caffarelli4, Reference Caffarelli6, Reference Caffarelli5]; see also the book [Reference Caffarelli and Salsa3]) and De Silva–Ferrari–Salsa (see, e.g., [Reference De Silva, Ferrari and Salsa21, Reference De Silva, Ferrari and Salsa22] and the recent survey article [Reference De Silva, Ferrari and Salsa23]). Until recently, the only missing piece of the picture was the behavior of $\Gamma ^{\pm }$ in neighborhoods where the two sets are not disjoint but also not identical. To be more precise, define the points in the intersection of $\Gamma ^{\pm }$ as two-phase points, $\Gamma _{\mathrm {TP}}(u) := \Gamma ^+ \cap \Gamma ^-$ . Points which are in one of $\Gamma ^{\pm }$ but not both are one-phase points, $\Gamma _{\mathrm {OP}}(u) := (\Gamma ^+\cup \Gamma ^-) \setminus (\Gamma ^+\cap \Gamma ^-)$ . It was a long open question how the free boundary behaved around branch points, that is, points around which the free boundary contains both one-phase points and two-phase points at every scale:
This open question was finally resolved in the recent work of De Philippis, Spolaor and Velichkov [Reference De Philippis, Spolaor and Velichkov20] (see also [Reference Spolaor and Velichkov28] when $n=2$ ):
Theorem 1.1. (Main Theorem in [Reference De Philippis, Spolaor and Velichkov20])
Let u be a minimizer to the energy in equation (1.1) in $\Omega $ with $\lambda _{\pm }> 0$ . Then for every $x_0 \in \Gamma ^+ \cap \Gamma ^-\cap \Omega $ there exists an $r_0> 0$ such that both $\Gamma ^+\cap B(x_0, r_0)$ and $\Gamma ^-\cap B(x_0, r_0)$ are $C^{1,1/2}$ graphs.
We note that Theorem 1.1 is most interesting around branch points, that is, $x_0 \in \Gamma _{\mathrm {BP}}(u)\cap \Omega $ . However, left open in [Reference De Philippis, Spolaor and Velichkov20] is whether branch points actually exist in the strict interior of a domain or, more precisely, whether there exists a minimizer u in $\Omega $ such that $\Gamma _{\mathrm {BP}}(u)\cap \Omega \neq \emptyset $ . Here, we resolve that open question when $\lambda _+ = \lambda _- = 1$ .
Theorem 1.2. (Main Theorem)
There exists a domain $\Omega \subset \mathbb R^2$ and a minimizer u to $J_\Omega $ with $\lambda _+ = \lambda _- = 1$ such that $\Gamma _{\mathrm {BP}}(u)\cap \Omega \neq \emptyset $ . Even stronger, there exists a ‘pool’ of zeroes: a (nonempty) connected component $\mathcal O$ of $\{u = 0\}$ such that $\overline {\mathcal O} \subset \subset \Omega $ and $\partial \mathcal O \cap \Gamma ^\pm \neq \emptyset $ .
Remark 1.3. After this preprint was posted on arXiv, the authors were informed by H. Shahgholian of another method that could be used to produce branch points ‘topologically’. We thank him for his interest and for explaining this to us. Take a square with zero boundary values on the top and bottom and boundary values $+a$ on the left side and $-a$ on the right side. Let u minimize J (with $\lambda _+ = \lambda _-$ ) inside of the square, Q, with these boundary values. If $a$ is large enough, then it is not hard to show that $\Gamma _{\mathrm {TP}}(u) \neq \emptyset $ . If $\Gamma _{\mathrm {TP}}(u)\cap \partial Q =\emptyset $ , then there must be a branch point.
On the other hand, if $\Gamma _{\mathrm {TP}}(u)$ touches the fixed boundary, then it should do so nontangentially and in the interior of the top or bottom edge. This would contradict a version of the main theorem in [Reference Karakhanyan, Kenig and Shahgholian26], for $\lambda _+ = \lambda _-$ (the theorem in [Reference Karakhanyan, Kenig and Shahgholian26] assumes that $\lambda _+ \neq \lambda _-$ ).
It would be interesting to formalize this construction, especially as it may be able to produce branch points which cannot be perturbed away under small deformations of the boundary values or functional. On the other hand, this construction cannot be easily modified to produce ‘pools’ of zeroes. In particular, the branch points in this example are somehow ‘forced’ by the topology of the boundary data (i.e., the presence of a relatively open set of zeros), in contrast to the construction in our Theorem 1.2.
Our tools are reminiscent of our previous studies on almost-minimizers with free boundary, c.f. [Reference David and Toro10, Reference David, Engelstein and Toro13, Reference David, Engelstein, Garcia and Toro12]. In particular, we carefully choose competitor functions and use ideas from harmonic analysis and geometric measure theory. We further remark that our construction can be extended to produce examples in dimensions $n\geq 3$ ; see Remark 4.5.
1.1 Comparison with other work on branch points
While we believe the question of whether branch points (or pools) in the free boundary of minimizers of equation (1.1) exist has been open until now, there has been substantial work on branch points for other related functionals and for ‘critical points’ of the functional (1.1).
In particular, branch points in the free boundaries of minimizers to a related vectorial problem were constructed by Spolaor and Velichkov in [Reference Spolaor and Velichkov28]. Additionally, a related phenomenon, in which the free boundary of minimizers to a one-phase version of equation (1.1) comes into contact with the fixed boundary (i.e., $\partial \Omega $ ), resulting in branching-like behavior, is well studied (e.g., [Reference Chang-Lara and Savin8, Reference De Philippis, Spolaor and Velichkov19]).
The only other work we are aware of regarding branch points in the free boundary of functions associated to the energy (1.1) is the recent preprint [Reference De Philippis, Spolaor and Velichkov19]. In this very nice work, the authors (amongst other things) construct an infinite family of critical points to the functional (1.1) when $n = 2$ using (quasi)conformal mappings (see [Reference De Philippis, Spolaor and Velichkov19, Theorem 1.8]).Footnote ^{2} Without being precise, we recall that critical points to equation (1.1) satisfy the associated Euler–Lagrange equations but do not necessarily (locally) minimize the functional in any domain (e.g., $u(x) = x$ is a critical point of $J_{\Omega }$ with $\lambda _+ = 1$ but not a (local) minimizer).
Remark 1.4. To be explicit, we note that the results of [Reference De Philippis, Spolaor and Velichkov19] neither imply nor are implied by our results here. In particular, our main theorem does not analyze the rate at which $\Gamma ^+$ and $\Gamma ^-$ come together at the cusp points and thus does not produce examples with different rates. On the other hand, it is not clear whether the examples produced in [Reference De Philippis, Spolaor and Velichkov19] are minimizers.
Furthermore, the methods of proof are very different, insofar as [Reference De Philippis, Spolaor and Velichkov19] draws an interesting connection with minimizers of a nonlinear obstacle-type problem and uses (quasi)conformal maps in their construction. We construct the relevant boundary values and domains explicitly but do not have a closed formula for our minimizer. Rather, we use tools from harmonic analysis and geometric measure theory to constrain the behavior of the minimizer. In particular, our methods extend to producing examples in dimension $n> 2$ ; see Remark 4.5, which presumably are out of reach of (quasi)conformal methods.
We are not aware of any prior work on ‘pools’ in the zero set of minimizers to equation (1.1). We will limit ourselves to pointing out that it is easy to construct examples of minimizers whose zero set has nonempty interior but that some care is required to constrain this open component to the strict interior of the domain.
1.2 Accumulation of branch points
Also of interest in [Reference De Philippis, Spolaor and Velichkov19] is the fact that for certain symmetric (in a precise sense) critical points of equation (1.1) in two dimensions, the branch points in the free boundary are locally isolated (c.f. [19, Theorem 1.6(a)]). In fact, in analogy with area-minimizing surfaces (see, e.g., [Reference Chang7, Reference De Lellis, Spadaro and Spolaor16, Reference De Lellis17, Reference De Lellis18, Reference De Lellis, Marchese, Spadaro and Valtorta15]) one might conjecture the following:
Conjecture 1.5. Let u be a minimizer to equation (1.1) in some $\Omega \subset \mathbb R^n$ . Then, for any $D\subset \subset \Omega $ the set $D\cap \Gamma _{\mathrm {BP}}(u)$ is locally contained in finitely many Lipschitz $(n-2)$ -dimensional submanifolds.
In the second part of this paper, we show that such a theorem fails for almost-minimizers of equation (1.1). Recall that almost-minimizers minimize the energy (1.1) up to some noise.
Definition 1.6. We say that u is an almost-minimizer to equation (1.1) in $\Omega \subset \mathbb R^n$ if there is a $C> 0$ and an $\alpha \in (0, 1]$ such that for every ball, B, with radius $r(B)> 0$ and $\overline {B} \subset \Omega $ , and for every $v\in W^{1,2}(\Omega )$ with $v = u$ in $\Omega \setminus \overline {B}$ , we have
$$J_B(u) \;\le\; \big(1 + C\,r(B)^{\alpha }\big)\, J_B(v).$$
Almost-minimizers arise naturally in constrained optimization (and thus in eigenvalue-optimization problems; see, e.g., [Reference Mazzoleni, Terracini and Velichkov27]). Almost-minimizers may not satisfy an Euler–Lagrange equation, but the work of the authors in [Reference David and Toro10, Reference David, Engelstein and Toro13, Reference David, Engelstein, Garcia and Toro12] shows that the ‘first-order’ regularity of almost-minimizers mimics that of minimizers; in particular, almost-minimizers are Lipschitz continuous, nondegenerate, $C^{1,\beta }$ up to their free boundary and, in the one-phase case, have free boundaries which are smooth up to a set of $\mathcal H^{n-1}$ measure zero.
However, in contrast to [Reference De Philippis, Spolaor and Velichkov19, Theorem 1.6(a)] and Conjecture 1.5, in this paper, we prove that the set of branch points for almost-minimizers can be essentially arbitrary.
Theorem 1.7. (Corollary to Theorem 5.1)
Let $E \subset \mathbb R^{n-1}$ be a compact set with no interior point. Embed E into $\mathbb R^n$ so that $E \subset \{(x', 0)\mid x' \in \mathbb R^{n-1}\}$ and let $R> 0$ be so large that $E \subset B(0, R/10)$ . Then there exists an almost-minimizer, u, to $J_{B(0, R)}$ with $\lambda _+ = \lambda _- = 1$ such that $\Gamma _{\mathrm {BP}}(u) = E$ . Furthermore, we can take u to be such that $\Gamma ^+$ is the reflection of $\Gamma ^-$ across the hyperplane $\{x_n =0\}$ (so that this solution is ‘symmetric’ in the sense of [Reference De Philippis, Spolaor and Velichkov19]).
For those familiar with almost-area minimizers, this theorem may seem trivial; indeed, any graph of a $C^{1,\alpha }$ function is almost-area minimizing. However, it is not the case that if $\partial \{u \neq 0\}$ is locally given by a (union of) smooth graphs and u is smooth, except for jumps along $\partial \{u \neq 0\}$ , then u is an almost-minimizer to the energy in equation (1.1). Indeed, almost-minimizers must satisfy nondegeneracy and boundedness conditions in addition to a condition on their normal derivative at $\partial \{u \neq 0\}$ (c.f. Lemma 5.3 below).
The essence of Theorem 1.7 is that we are able to construct almost-minimizers to equation (1.1) whose free boundaries are given by the graphs of any two smooth functions $f^- \leq f^+$ over $\mathbb R^{n-1} \subset \mathbb R^n$ . These almost-minimizers are given by a regularized distance (see equation (5.9) below) developed by the first author, with Feneuil and Mayboroda, to characterize the geometry of sets of high codimension using degenerate PDE (see, e.g., [Reference David, Feneuil and Mayboroda14]). This connection is surprising, but essentially is due to the fact that regularized distances satisfy the same growth conditions as almost-minimizers and, due to the results of [Reference David, Engelstein and Mayboroda11], one can prescribe their normal derivatives along the set on which they vanish.
2 Slice minimizers and uniform Lipschitz continuity
Let $I_N:=[-3N,3N]$ , with N very large, and consider $\overline {R}_N:=I_N\times [-1, 1]\subset {\mathbb {R}}^2$ . To define our boundary conditions on $I_N\times \{1\}$ and $I_N\times \{-1\}$ , we fix some $\alpha \in (0,1)$ small but universal ( $\alpha = 1/10$ will do) and define
$$f_N(x) \;=\; \begin{cases} 1-\alpha , & |x|\le N,\\[2pt] \dfrac {|x|(1+\alpha )}{N}-2\alpha , & N<|x|<2N,\\[2pt] 2, & 2N\le |x|\le 3N \end{cases}$$
(see Figure 1).
We then let
that is, a minimizer to the Alt–Caffarelli–Friedman functional with boundary values $\pm f_N(x)$ . We should note that we are abusing the argmin notation slightly, as this minimizer is not necessarily unique, but we can pick any minimizer for the analysis below. We should also note that we are not prescribing Dirichlet data on the ‘vertical’ parts of the boundary. However, an existence and regularity theory for minimizers given ‘partial Dirichlet data’ exists (see, e.g., [Reference Alt, Caffarelli and Friedman2]) or we could prescribe data on $\{\pm 3N\} \times [-1,1]$ that simply linearly interpolates between $-f_N(\pm 3N)$ and $f_N(\pm 3N)$ .
When the precise value of N is not important we will suppress it from the notation.
In order to study $u_N$ we will introduce the ‘slice minimizer’, $v_N$ , defined as follows: For each $x\in I$ , $v_N(x, \cdot )$ is the uniqueFootnote ^{3} minimizer of the one-dimensional functional
$$H_x(v) := \int _{-1}^{1} (v'(y))^2\,dy + |\{y\in [-1,1] : v(y)\neq 0\}|$$
under the constraint $v_N(x,\pm 1)=\pm f_N(x)$ .
Notice first that, for $x\in I$ , the set $\{v_N(x, \cdot ) =0\}$ is an interval. Indeed, if $y_1$ and $y_2$ are the first/last points where $v_N$ vanishes, replacing $v_N$ by $0$ on the interval $[y_1,y_2]$ yields an admissible candidate w with $\int _{-1}^1(w'(y))^2dy \le \int _{-1}^1(v_N'(y))^2dy$ and $|\{w\neq 0\}|\le |\{v_N\neq 0\}|$ , with strict inequality, unless $v_N\equiv 0$ on $[y_1,y_2]$ . Moreover, $v_N$ is harmonic in the open set $\{v_N\neq 0\}\cap (-1,1)$ , that is, $v_N$ is locally affine. A straightforward calculation, which we defer to the appendix (c.f. Sections 6.1.1 and 6.1.2), allows us to explicitly calculate the slice minimizer and its energy for each $x\in [-3N, 3N]$ .
Lemma 2.1. Let $v(y)=v_N(x,y)$ be the minimizer of $H_x$
with $v(\pm 1)=\pm f_N(x)$ .
Case 1. When $f_N(x)\ge 1$ , $v(y)=y\,f_N(x)$ , for $y\in [-1,1].$
Notice that in this case, $H_x(v)=2f_N(x)^2+2.$

Case 2. When $f_N(x)<1$ , $v(y)=\text {sgn}(y)(|y|-1+f_N(x))_+.$
In this case, $H_x(v)=4f_N(x).$
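The two closed-form energies above are easy to sanity-check numerically. The following short script (our own illustration, not from the text; the one-parameter competitor family and the sample values of $f$ are ad hoc choices) minimizes $H_x$ over truncated affine profiles and recovers both cases of Lemma 2.1:

```python
import numpy as np

def slice_energy(f, a):
    # H_x of the truncated affine profile v_a(y) = sgn(y) * f * (|y| - a)_+ / (1 - a):
    # two ramps of slope f/(1-a), each over length (1-a), give Dirichlet energy
    # 2 f^2 / (1 - a), and the measure term is |{v_a != 0}| = 2 (1 - a).
    return 2 * f**2 / (1 - a) + 2 * (1 - a)

def best_over_family(f):
    a_grid = np.linspace(0.0, 0.999, 100_000)
    return slice_energy(f, a_grid).min()

# Case 1: f >= 1 -- the affine profile (a = 0) wins, with energy 2 f^2 + 2.
for f in [1.0, 1.5, 2.0]:
    assert abs(best_over_family(f) - (2 * f**2 + 2)) < 1e-3

# Case 2: f < 1 -- vanishing on |y| <= 1 - f is strictly better, with energy 4 f.
for f in [0.9, 0.75]:
    assert abs(best_over_family(f) - 4 * f) < 1e-3
```

At $f=1$ the two closed forms agree ( $2f^2+2 = 4f = 4$ ), matching the crossover between the two cases.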
A crucial observation is that even though we built $v_N$ ‘slice by slice’, its $x$ -derivative still has small $L^2$ norm.
Lemma 2.2. We have
$$\iint _{\overline {R}_N}\Big (\frac {\partial v_N}{\partial x}\Big )^2\,dx\,dy \;\le \; \frac {16}{N}. \tag {2.3}$$
Proof. Notice that $v_N(x,y)$ is not harmonic in $\{v_N\neq 0\}$ ; hence, $v_N$ is not a minimizer of $J(\cdot , R_N)$ . However, $\frac {\partial v_N}{\partial x}$ exists a.e. in $(-3N,3N)\times (-1,1)$ (see Lemma 2.1), and, where it exists, $\left |\frac {\partial v_N}{\partial x}\right |\le |f_N'(x)|\le \frac {1+\alpha }{N}\le \frac {2}{N}.$
Since $f_N(x)\equiv 1-\alpha $ for $|x|<N$ and $f_N(x)\equiv 2$ for $2N<|x|<3N$ , $\left |\frac {\partial v_N}{\partial x}\right |=0$ on $(-N,N)\times (-1,1)$ , on $(2N,3N)\times (-1,1)$ and on $(-3N,-2N)\times (-1,1)$ . Consequently,
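As a numerical check of this estimate (an illustration under stated assumptions: the explicit piecewise formula for $f_N$ below is read off from this section, and the values of $\alpha$ and $N$ are arbitrary), one can discretize $\iint (\partial _x v_N)^2$ directly:

```python
import numpy as np

alpha, N = 0.1, 50.0  # illustrative values; the paper takes N very large

def f_N(x):
    # assumed piecewise boundary datum: 1 - alpha on |x| <= N,
    # linear on N < |x| < 2N, and 2 on 2N <= |x| <= 3N
    return np.clip(np.abs(x) * (1 + alpha) / N - 2 * alpha, 1 - alpha, 2.0)

def v_N(x, y):
    # the slice minimizer of Lemma 2.1 in both cases
    f = f_N(x)
    return np.where(f >= 1, y * f, np.sign(y) * np.maximum(np.abs(y) - 1 + f, 0.0))

xs = np.linspace(-3 * N, 3 * N, 6001)
ys = np.linspace(-1.0, 1.0, 401)
X, Y = np.meshgrid(xs, ys, indexing="ij")
dvdx = np.gradient(v_N(X, Y), xs, axis=0)
integral = (dvdx**2).sum() * (xs[1] - xs[0]) * (ys[1] - ys[0])
assert integral <= 16.0 / N  # the bound of Lemma 2.2
```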
Using the fact that $v_N$ has smaller energy than $u_N$ ‘slice by slice’, but larger energy overall, we can transfer equation (2.3) to $u_N$ :
Lemma 2.3. Any $u_N$ which minimizes J in $\overline {R}_N$ with $u_N(x,\pm 1)=\pm f_N(x)$ satisfies
$$\iint _{\overline {R}_N}\Big (\frac {\partial u_N}{\partial x}\Big )^2\,dx\,dy \;\le \; \frac {16}{N}. \tag {2.4}$$
Proof. Since $u_N$ is a minimizer of J,
Moreover, for every $x\in [3N,3N]$ fixed, $v_N(x,\cdot )$ is a minimizer of $H_x$ , hence
Integrating the last inequality on $[-3N,3N]$ leads to
Combining equations (2.5) and (2.6), we conclude
Our next goal is to prove a uniform Lipschitz bound on $u_N$ . To do so, we will compare it with $v_N$ . Since the latter minimizes the energy on each slice, it will be convenient to integrate this ‘slice-by-slice’ energy across all values of $x$ .
Definition 2.4. Define the ‘total sliced energies’ of a function w by
$$S(w) := \int _{-3N}^{3N} H_x(w(x,\cdot ))\,dx,$$
and, with $Q = [a,b] \times [-1, 1]\subset \overline {R}_N$ ,
$$S_Q(w) := \int _a^b H_x(w(x,\cdot ))\,dx.$$
The following lemma encapsulates the fact that $u_N$ is a minimizer and $v_N$ is a slice minimizer, written in the language of total slice energy.
Lemma 2.5. We have
$$S(u_N) \;\le \; S(v_N) + \frac {16}{N}. \tag {2.9}$$
Proof. Let $w\in W^{1,2}(\overline {R}_N)$ . Notice that
$$J_{\overline {R}_N}(w) = \iint _{\overline {R}_N}\Big (\frac {\partial w}{\partial x}\Big )^2\,dx\,dy + S(w). \tag {2.10}$$
Using equation (2.10) and the fact that $H_x(v_N(x,\cdot ))\le H_x(u_N(x,\cdot ))$ a.e., we obtain
Combining this with the equality in equation (2.10) and with equation (2.3), we conclude that if $u_N$ is a minimizer of J, then
We can also localize these estimates to $Q=[a,b]\times [-1,1]$ .
Lemma 2.6. We also have
$$S_Q(u_N) \;\le \; S_Q(v_N) + \frac {16}{N}. \tag {2.11}$$
Proof. Given a subset $X\subset [-3N,3N]\times [-1,1]$ and defining
we obtain $S(u_N)=S_Q(u_N)+S_{{\overline {R}_N}\setminus Q}(u_N)$ . If we had $S_Q(u_N)>S_Q(v_N)+\frac {16}{N}$ , then
Since for a.e. $x\in [-3N,3N]$ we have $H_x(v_N(x,\cdot ))\le H_x(u_N(x,\cdot ))$ , integrating this inequality we obtain $S_{{\overline {R}_N}\setminus Q}(v_N)\leq S_{{\overline {R}_N}\setminus Q}(u_N)$ . Together with equation (2.12), this gives
contradicting equation (2.9).
With these estimates, we are almost ready to prove the (uniform) Lipschitz continuity of the $u_N$ on compact subsets of ${\overline {R}_N}$ . We introduce the following notation, by analogy with Definition 2.4, for $Q = [a,b] \times [-1,1]$
Our first result is an immediate corollary of Lemma 2.6 and equation (2.4).
Corollary 2.7. There exists a constant $C_0> 0$ such that for any $N> 0$ and any $Q = [a,b]\times [-1,1] \subset \overline {R}_N$ we have
$$\iint _Q |\nabla u_N|^2\, dA \;\le \; C_0\,(1 + b - a).$$
Proof. From Lemma 2.6, we have that $S_Q(u_N) \leq S_Q(v_N) + \frac {16}{N}$ and from equation (2.4) we have $\iint _Q |\partial _x u_N|^2\, dA \leq \frac {16}{N}$ . Putting this together, we get that
Thus, it suffices to show that $S_Q(v_N)$ grows proportionally to $b-a$ with a constant of proportionality independent of $a,b, N$ . Indeed, by Lemma 2.1, $H_x(v_N) \leq \max \{2f_N(x)^2 + 2, 4f_N(x)\} \leq 10$ . Integrating that across $[a,b]$ gives the desired result.
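The bound $H_x(v_N(x,\cdot ))\le 10$ used here is elementary, since $f_N$ takes values in $[1-\alpha , 2]\subset [3/4, 2]$ ; a one-line numerical confirmation (illustration only):

```python
import numpy as np

# H_x(v_N(x, .)) as a function of f = f_N(x), per Lemma 2.1:
fs = np.linspace(0.75, 2.0, 10_001)           # f_N takes values in [1 - alpha, 2]
H = np.where(fs >= 1, 2 * fs**2 + 2, 4 * fs)  # the two closed-form cases
assert H.max() <= 10.0                        # the uniform bound used above
assert abs(H.max() - 10.0) < 1e-9             # attained at f = 2
```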
From here, we can conclude the main result of this section, the uniform Lipschitz continuity.
Theorem 2.8. For $0 < \delta < 1$ , we can find constants $L \geq 1$ and $N_0> 0$ such that
$$\|u_N\|_{\mathrm {Lip}(\Omega _N)} \;\le \; L$$
for $N \geq N_0$ , where $\Omega _N = (-3N+\delta , 3N-\delta ) \times (-1+\delta , 1-\delta )$ .
Proof. As $u_N$ is a minimizer, we can apply [Reference David and Toro10, Theorem 8.1] (c.f. the discussion at the bottom of page 504 in [Reference David and Toro10]), which gives a Lipschitz bound on almost-minimizers depending only on the distance from the boundary and the $L^2$ norm of the gradient (see also [Reference David, Engelstein and Toro13, Remark 2.2]). Corollary 2.7 gives uniform bounds on the $L^2$ norm of the gradient of $u_N$ inside of any rectangle, and a covering argument finishes the proof.
3 Existence of a zero set
The goals of this section are twofold: first, to prove that $|\{u_N = 0\}|> 0$ (which will follow from Lemma 3.5) and, second, to show that the set $\{u_N = 0\}$ does not get too close to the boundary of $R_N$ (Lemma 3.4 and Corollary 3.7).
Let us first describe the zero set of $v_N$ , the ‘slice minimizer’.
Lemma 3.1. The following holds regarding the set $\{v_N(x,y)=0\}$ :

○ When $|x|\ge \frac {(1+2\alpha )N}{1+\alpha }$ , $v_N(x,y)=0$ only when $y=0$ .

○ When $|x|\le N$ , $v_N(x,y)=0$ for $|y|\le \alpha $ .

○ When $N<|x|<\frac {(1+2\alpha )N}{1+\alpha }$ , $v_N(x,y)=0$ when $|y|\le 2\alpha +1-\frac {|x|(1+\alpha )}{N}$ .
Proof. The result follows from the following simple observations:

○ When $|x|\ge \frac {(1+2\alpha )N}{1+\alpha }$ , $f_N(x)\ge 1$ . In this case $v_N(x,y)=y\,f_N(x)$ .

○ When $|x|\le N$ , $f_N(x)\equiv 1-\alpha $ , and $v_N(x,y)=\text {sgn}(y)(|y|-\alpha )_+$ .

○ In the remaining interval, $f_N(x)=\frac {|x|(1+\alpha )}{N}-2\alpha $ and $v_N(x,y)=\text {sgn}(y)\big (|y|-1+\frac {|x|(1+\alpha )}{N}-2\alpha \big )_+$ .
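The zero-set description in Lemma 3.1 can likewise be checked on a grid (a throwaway sketch; the formula for $f_N$ and the parameter values below are our assumptions, consistent with this section):

```python
import numpy as np

alpha, N = 0.1, 50.0  # illustrative values

def f_N(x):
    # assumed piecewise boundary datum from Section 2
    return min(2.0, max(1 - alpha, abs(x) * (1 + alpha) / N - 2 * alpha))

def zero_halfwidth(x):
    # half-width of {y : v_N(x, y) = 0}, measured on a fine grid
    ys = np.linspace(-1.0, 1.0, 200_001)
    f = f_N(x)
    v = ys * f if f >= 1 else np.sign(ys) * np.maximum(np.abs(ys) - 1 + f, 0.0)
    return np.abs(ys[np.abs(v) < 1e-12]).max(initial=0.0)

assert abs(zero_halfwidth(0.0) - alpha) < 1e-4      # |y| <= alpha when |x| <= N
assert zero_halfwidth(2.5 * N) < 1e-4               # only y = 0 once f_N >= 1
x = 1.05 * N                                        # intermediate regime
assert abs(zero_halfwidth(x) - (2 * alpha + 1 - x * (1 + alpha) / N)) < 1e-4
```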
We expect a minimizer $u_N$ of J, taken among all functions $w\in W^{1,2}({\overline {R}_N})$ with $w(x,\pm 1)=\pm f_N(x)$ ,Footnote ^{4} to look similar to $v_N$ . In particular, we want to extract information about its zero set and prove that $\{u_N=0\}$ has a ‘pool’ close to $0$ .
Before we can prove this closeness, we need to observe that our minimizer is nice on ‘most’ of the vertical slices.
Definition 3.2. Let $X_0\subset I$ be the smallest set such that $x\notin X_0$ implies that $u_N(x,\cdot )\in W^{1,2}([1,1])$ and $\lim \limits _{y\rightarrow \pm 1}u_N(x,y)=\pm f_N(x)$ .
Since $u_N \in W^{1,2}(R_N)$ , we note that $X_0$ has measure zero.
We now show that the zero set of $u_N$ does not get too close to the ‘top’ or ‘bottom’ of the rectangle. We start by showing that if $u_N$ is small close to the top or bottom of the rectangle, then that slice has large energy.
Lemma 3.3. Let $\varepsilon \in (0,1), \delta \in (0,1/2)$ and assume that $|u_N(x, y)| < \delta $ for some $x\in [-3N, 3N]\backslash X_0$ and some y with $1-|y| < \varepsilon $ . Then $H_x(u_N(x,\cdot )) \geq \frac {(f_N(x) - \delta )^2}{\varepsilon }$ .
In particular, if $x \in I \setminus X_0$ and there exists y such that $1-|y| < \frac {1}{44}$ and $|u_N(x,y)| < \frac {1}{4}$ , then $H_x(u_N(x, \cdot )) \geq H_x(v_N(x,\cdot )) + 1$ .
Proof. Without loss of generality, $u_N(x,\cdot )$ attains both the value $f_N(x)$ and a value smaller than $\delta $ on an interval of length $\varepsilon $ . The lowest-energy way to do this is for these values to be achieved at the endpoints of the interval and for $u_N$ to interpolate linearly between them. Thus, $H_x(u_N(x, \cdot )) \geq \frac {(f_N(x) - \delta )^2}{\varepsilon }$ . The second result follows from the first once we remember that $f_N(x) \geq \frac {3}{4}$ for all x, and $H_x(v_N(x,\cdot )) \leq 10$ for all x.
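The lower bound in this proof is the (discrete) Cauchy–Schwarz inequality: any path from a value below $\delta $ to $f_N(x)$ over length $\varepsilon $ pays Dirichlet energy at least $(f_N(x)-\delta )^2/\varepsilon $ . A quick randomized check (our illustration; the competitor family and parameters are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f, delta, eps = 0.75, 0.25, 1.0 / 44.0
ys = np.linspace(0.0, eps, 201)
lower = (f - delta) ** 2 / eps  # the bound from Lemma 3.3

for _ in range(200):
    # random path from delta to f over an interval of length eps:
    # linear interpolation plus a bump vanishing at both endpoints
    bump = np.sin(np.pi * ys / eps) * rng.normal(scale=0.2)
    w = delta + (f - delta) * ys / eps + bump
    dirichlet = np.sum(np.diff(w) ** 2 / np.diff(ys))
    assert dirichlet >= lower - 1e-9  # discrete Cauchy-Schwarz: always holds
```

Equality is attained exactly by the linear interpolant, which is why the proof may assume the values are achieved at the endpoints.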
We are now ready to show the existence of a strip near the top and bottom of $R_N$ , on which $u_N$ cannot vanish. We actually show something stronger: $u_N$ is quantitatively large in this strip.
Lemma 3.4. Let $\delta> 0$ and set
$$R_{\pm } \;=\; [-3N+\delta , 3N-\delta ]\times \Big (\pm \Big [1-\tfrac {1}{44},\, 1-\tfrac {1}{88}\Big ]\Big )$$
(the two blue zones in the picture). Then there exists an $N_0 = N_0(\delta )> 1$ such that if $N> N_0$ , then $\pm u_N \ge \frac {1}{8}$ on $R_{\pm }$ .
Proof. Fix $\delta> 0$ . Recall from Theorem 2.8 that there exists $L> 0$ (independent of N but dependent on $\delta> 0$ ) such that $\|u_N\|_{\mathrm {Lip}(\Omega _N)} \leq L$ .
We first check that $|u_N(x_0,y_0)| \geq \frac 18$ on $R_\pm $ . Note that, if $|u_N(x_0,y_0)| < \frac 18$ for some $(x_0,y_0) \in [-3N + \delta , 3N-\delta ]\times ([-1 +\frac {1}{88}, -1+\frac {1}{44}]\cup [1-\frac {1}{44}, 1-\frac {1}{88}])$ , then by Lipschitz continuity there exists an interval $I \subset [-3N + \delta , 3N-\delta ]$ of length $\frac {1}{8L}$ such that $|u_N(x,y_0)| < \frac {1}{4}$ for all $x\in I$ .
We apply Lemma 3.3 to conclude that for almost every $x\in I$ we have $H_x(u_N(x,\cdot )) \geq H_x(v_N(x,\cdot )) + 1$ . If $Q = I \times [-1,1]$ , then this implies that $S_Q(u_N) \geq S_Q(v_N) + |I| = S_Q(v_N) + \frac {1}{8L}.$ Of course, this contradicts equation (2.11) as long as $\frac {16}{N} < \frac {1}{8L}$ , that is, as long as $128L < N$ .
Now, we check that $u_N$ has the right sign on $R_\pm $ . Suppose, for instance, that $u_N(x,y) \leq -1/8$ somewhere on $R_+$ . Since $u_N$ is continuous, $u_N(x,y) \leq -1/8$ everywhere on $R_+$ . Then for all $x \in \left [-3N + \delta , 3N-\delta \right ] \setminus X_0$ , $u_N(x,\cdot )$ is a Sobolev function that goes from $-1/8$ to at least $3/4$ in an interval of length at most $1/88$ ; a direct computation shows that $H_x(u_N(x,\cdot )) \geq \frac {3}{8} \times 88 \geq 33$ , and we reach a contradiction as above.
Now, we show that $\{u_N = 0\}$ must be contained in a strip around $\{y=0\}$ when $|x| < N$ . Actually, we prove something more precise.
Lemma 3.5. For $N> 1$ large enough, if $|x| < N-1$ and $u_N(x,y)> 0$ , then $y> \alpha /8$ . Similarly, if $u_N(x,y) < 0$ , then $y < -\alpha /8$ .
Proof. Assume, by contradiction, that this is not the case; then, without loss of generality, there exists a point $(x_0, y_0)$ with $|x_0| < N-1$ , $y_0 \leq \alpha /8$ and $u_N(x_0, y_0)> 0$ .
By continuity of $u_N$ , the connected component of $\{u_N> 0\}$ containing $(x_0, y_0)$ must be separated from $\{y = -1\}$ by $\{u_N = 0\}$ . This implies that there is a connected subset of $\{(x,y)\mid |x| \leq N, u_N(x,y) = 0\}$ which touches the sets $\{x = N\}$ and $\{x= -N\}$ and which separates $(x_0, y_0)$ from $\{y =-1\}$ . By Lemma 3.4, this connected component cannot intersect the set $\{(x,y) \mid |x| < N, -1+1/44> y > -1+1/88\}$ . If $y_0 < -1+1/44$ , then this connected component lies below $y = -1+1/88$ and the length of its projection onto the $x$ -axis is at least $2N$ . Such a configuration has too much energy by Lemma 3.3, and thus we can assume $y_0> -1+1/44$ .
We can also assume that $x_0 \notin X_0$ (since $\{u_N> 0\}$ is open), so by continuity of $u_N$ on the slice $\{x = x_0\}$ and Lemma 3.4 there exists a point $(x_0, \tilde {y})$ with $-1+1/44<\tilde {y} < y_0$ and $u_N(x_0, \tilde {y}) = 0$ and $(x_0, \tilde {y}) \in \partial \{u_N> 0\}$ .
We note that $\{u_N> 0\}\cap ([-N, N]\times [-1+1/88, 1-1/88])$ is a locally nontangentially accessible (NTA) domain, uniformly in N (i.e., for any $K \subset \subset ([-N, N]\times [-1+1/88, 1-1/88])$ , $\{u_N>0\}$ satisfies the corkscrew conditions at $Q\in \partial \{u_N> 0\}\cap K$ with constants and at scales that depend only on $K$ , not $N$ ; c.f. [Reference David, Engelstein and Toro13, Theorem 2.3]). In particular, there exists a point $(x_1, y_1) \in \{u_N> 0\}$ such that $\|(x_1, y_1) - (x_0, \tilde {y})\| \leq r_0 = r_0(K) \leq \alpha /8$ and $\mathrm {dist}((x_1, y_1), \{u_N \leq 0\}) \geq r_0/M$ for some $M> 1$ (where both $r_0, M$ are independent of N large).
By the nondegeneracy of $u_N$ (c.f. [Reference Alt and Caffarelli1, Lemma 3.4]) and the Lipschitz continuity of $u_N$ , Theorem 2.8, there exists a constant $C> 1$ (again uniform for large N) such that $u_N \geq r_0/C$ in the ball $B((x_1, y_1), r_0/(3CL))$ (where L is the Lipschitz constant).
Therefore, there exists an interval $I \subset [-N, N]$ of length $2r_0/(3CL)$ such that for each $x\in I$ there exists a $y < \alpha /2$ with $u_N(x, y)> r_0/C$ . Invoking the computations of Section 6.2 (c.f. Claim 6.1), we get that $H_x(u_N(x,\cdot )) \geq H_x(v_N(x, \cdot )) + \eta $ for every $x\in I \backslash X_0$ , where $\eta = \eta (\alpha , C, r_0)> 0$ is independent of N.
Let $Q = I \times [-1,1]$ ; integrating in $x$ and using equation (2.11), we get
$$S_Q(v_N) + \eta |I| \;\le \; S_Q(u_N) \;\le \; S_Q(v_N) + \frac {16}{N}.$$
This gives a contradiction if $N> 0$ is large enough (since $|I| = 2r_0/(3CL)$ and $\eta> 0$ are independent of N).
So we have good control on where $\{u_N = 0\}$ is in the central region. Before we end this section, it behooves us to refine the result of Lemma 3.3, with the goal of showing that when $f_N(x) \geq 1$ , we can actually confine $\{u_N = 0\}$ to an arbitrarily thin strip around the line $\{y=0\}$ . This will be used in the next section to show that there are no one-phase points on the sides of $R_N$ . We first estimate the difference between $v_N(x,\cdot )$ and near minimizers for $H_x$ .
Lemma 3.6. Let $x\in (-3N, 3N)$ be such that $f_N(x) \geq 1$ , and let $\varepsilon \in (0,1)$ . Let $w\in W^{1,2}([-1,1])$ , with $w(\pm 1) = \pm f_N(x)$ , and assume $H_x(w)\leq H_x(v_N(x,\cdot )) + \varepsilon $ . Then, there exists a $C> 0$ (uniform over the choice of $x, \varepsilon $ above) such that
$$\|w - v_N(x,\cdot )\|_{L^\infty ([-1,1])} \;\le \; C\varepsilon ^{1/4}.$$
Proof. Recall that for x as in the statement of Lemma 3.6, we have $v_N(x,y) = y\,f_N(x)$ . Our plan is to first modify w, reducing energy, and show the desired inequality for w, then estimate the $L^\infty $ distance between the original w and our modified function. We know that $w\in W^{1,2}([-1,1])$ , so by Sobolev embedding we have that w is Hölder continuous and thus
We construct $\hat {w}$ as follows: let $a$ and $b$ be the first and last zeroes of $w$ in $[-1,1]$ , set $\hat {w} = 0$ on $[a,b]$ , and let $\hat {w}$ be affine on $[-1,a]$ and on $[b,1]$ with $\hat {w}(\pm 1) = \pm f_N(x)$ .
We observe that $H_x(w) \geq H_x(\hat {w})$ (as we have enlarged the zero set and minimized Dirichlet energy where w is positive).
We can compute that
$$H_x(\hat {w}) = \frac {f_N(x)^2}{1+a} + \frac {f_N(x)^2}{1-b} + (1+a) + (1-b).$$
Recall that $H_x(v_N(x, \cdot )) = 2f_N^2(x) + 2$ and rewrite
$$H_x(\hat {w}) - H_x(v_N(x,\cdot )) \;\ge \; \frac {1}{1+a} + (1+a) + \frac {1}{1-b} + (1-b) - 4 \;=:\; F(a,b),$$
where the last inequality follows because $f_N(x) \geq 1$ in the salient range and $\frac {1}{1-b} + \frac {1}{a+1} - 2 \geq 0$ as long as $1> b \geq a >-1$ .
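The elementary inequality invoked in the last step can be confirmed on a grid (illustration only):

```python
import numpy as np

a = np.linspace(-0.99, 0.99, 397)
b = np.linspace(-0.99, 0.99, 397)
A, B = np.meshgrid(a, b, indexing="ij")
g = 1.0 / (1.0 - B) + 1.0 / (1.0 + A) - 2.0
assert g[A <= B].min() >= -1e-12   # 1/(1-b) + 1/(a+1) - 2 >= 0 whenever a <= b
assert 1.0 / (1.0 - 0.0) + 1.0 / (1.0 + 0.0) - 2.0 == 0.0  # equality at a = b = 0
```

(Setting $s = 1+a$ and $t = 1-b$ , this is just $\frac {1}{s} + \frac {1}{t} \ge \frac {4}{s+t} \ge 2$ when $s + t \le 2$ .)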
We compute that $F(0,0) = 0, \nabla F(0,0) = (0,0)$ and that
is a diagonal matrix with entries between $\frac {1}{10^7}$ and $10^7$ as long as $a_0,b_0 \in \left [-\frac {99}{100}, \frac {99}{100}\right ]$ . If either of $a_0, b_0$ is outside that range, then Lemma 3.3 gives a contradiction to the assumption on energy. Thus, by the Taylor remainder theorem (and the fact that F is $C^2$ as long as a stays away from $-1$ and b stays away from $1$ ) we have that
for some $(a_0, b_0)$ on the segment connecting $(0,0)$ and $(a,b)$ .
On the other hand, since $v_N(x,\cdot )$ is linear in y and $\hat {w}$ is piecewise linear in y, we can see that
Chaining everything together, we get that
We now estimate $|\hat {w}(t_0)-w(t_0)|$ for $t_0 \in [-1,1]$ . We have two cases; in the first, assume that $t_0 \in [a,b]$ . Then $2|\hat {w}(t_0) - w(t_0)| = 2|w(t_0)| \leq \int _a^{t_0}|w_y| + \int _{t_0}^b |w_y|$ . Using Jensen’s inequality, we get that
Putting this together, we have that
In the second case, we assume that $t_0 \in [-1, a]$ (the case that $t_0 \in [b, 1]$ works the same way). Since $w(-1) = \hat {w}(-1)$ and $w(a) = \hat {w}(a)$ we get that $2|w(t_0) - \hat {w}(t_0)| \leq \int _{-1}^a |\partial _y(w -\hat {w})|\, dy$ . Again applying Jensen’s inequality, we get that
Expanding out the integrand and using the fact that $w(a) - w(-1) = f_N(x)$ we get
We can chain the inequalities together as above to get the desired result.
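To illustrate the resulting $\varepsilon ^{1/4}$ -type stability, one can test the explicit family $w_t(y) = yf + t\sin (\pi y)$ , for which the excess energy is comparable to $t^2$ while $\|w_t - v\|_{L^\infty } = t$ (a sketch under our own choices of $f$ and $t$ , not from the text):

```python
import numpy as np

# Quantitative stability for Lemma 3.6 on the explicit family
# w_t(y) = y f + t sin(pi y), so that w_t(+-1) = +-f and ||w_t - v||_inf = t.
f = 1.5
ys = np.linspace(-1.0, 1.0, 20_001)

def H(w):
    # discrete slice energy: Dirichlet part plus the measure of {w != 0}
    return np.sum(np.diff(w) ** 2 / np.diff(ys)) + (np.abs(w) > 1e-12).mean() * 2.0

v = ys * f
for t in [0.01, 0.05, 0.1, 0.3]:
    w = ys * f + t * np.sin(np.pi * ys)
    excess = H(w) - H(v)          # approximately pi^2 t^2 here
    sup_dist = np.abs(w - v).max()
    assert sup_dist <= 2.0 * excess**0.25  # consistent with a C eps^{1/4} bound
```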
From here, we have an easy corollary: Outside of the central box, the zero set of $u_N$ is contained in a very thin strip around $\{y =0\}$ .
Corollary 3.7. Let $\delta , \theta> 0$ . There exists an $N_0 = N_0(\delta , \theta )> 0$ such that for $N> N_0$ and every pair $(x,y)\in R_N$ such that $|x| < 3N - \delta $ , $|y| \leq 1-\frac {1}{88}$ , $f_N(x) \geq 1$ , and $u_N(x,y) = 0$ , we have $|y| < \theta $ .
Proof. Fix $\delta> 0$ , and let $N> 0$ be big enough so that, invoking Theorem 2.8, we can say $u_N$ is $L$ -Lipschitz in $\left [-3N + \delta , 3N - \delta \right ]\times \left [-1+\frac {1}{44}, 1-\frac {1}{44}\right ]$ .
Assume there exists a point $(x_0, y_0)$ such that $f_N(x_0) \geq 1$ , $\theta \leq |y_0| \leq 1-\frac {1}{88}$ and $u_N(x_0, y_0) = 0$ . We know that $|y_0| < 1-\frac {1}{44}$ , by Lemma 3.4. Then there exists an interval I of length at least $\frac {\theta }{2L}$ such that $|u_N(x, y_0) - y_0f_N(x)| \geq \frac {\theta }{2}$ and $f_N(x) \geq 1$ for all $x\in I$ .
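The interval I can be produced by a routine Lipschitz estimate; the following sketch (our notation) uses only the $L$ -Lipschitz bound of Theorem 2.8, the normalization $f_N(x) \geq 1$ , and $|y_0| \geq \theta $ :

```latex
% For |x - x_0| \leq \theta/(2L), since u_N(x_0, y_0) = 0 and u_N is L-Lipschitz:
|u_N(x, y_0) - y_0 f_N(x)| \;\geq\; |y_0 f_N(x)| - |u_N(x, y_0) - u_N(x_0, y_0)|
\;\geq\; \theta - L|x - x_0| \;\geq\; \frac{\theta}{2}.
```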
By Lemma 3.6, this implies that $H_x(u_N(x,\cdot)) \geq H_x(v_N(x,\cdot)) + \frac {\theta ^4}{4C}$ for almost every $x\in I$ . Integrating and letting $Q = I\times [-1,1]$ , we get that $S_Q(u_N) \geq S_Q(v_N) + C^{-1}\theta ^5$ , where $C= C(\delta )> 0$ is independent of $N, \theta $ . This contradicts equation (2.11) as long as N is large enough (depending on $C, \theta $ and thus on $\delta , \theta $ ).
4 The proof of Theorem 1.2: Ruling out one-phase points
The main goal of this section is to finish up the proof of our main Theorem 1.2, that $\Gamma _{\mathrm {BP}}(u_N) \neq \emptyset $ . In fact, we have the following more precise description.
Theorem 4.1. Let $u_N, R_N, f_N$ be as above. Then, for N large enough, there exists a ‘pool of zeroes’, that is, a connected open set $\mathcal O \subset \{u_N =0\}\cap \{|x| < 2N+1\} \cap \{ |y| \leq 1-\frac {1}{44}\}$ such that $|\mathcal O|> 0$ , $\partial {\mathcal O}$ is contained in the free boundary $\Gamma ^+ \cup \Gamma ^-$ , and $\partial {\mathcal O}$ meets $\Gamma ^+$ , $\Gamma ^-$ , and the set, $\Gamma _{\mathrm {TP}}(u)$ , of branch points.
The idea is that Lemma 3.5 guarantees the existence of a ‘pool of zeroes’ separating the positive and negative phases in the central part of $R_N$ . We then want to show that this pool does not ‘leak’ to the sides of $R_N$ . For this, we need the following lemma, whose proof will be the main goal of this section.
Lemma 4.2. Let
There exists an $N_0> 1$ such that if $N \geq N_0$ , then every free boundary point for $u_N$ in $R_{ext}$ is a two-phase point. Or, put another way:
Before we prove the lemma, let us see how its proof implies the theorem.
Proof of Theorem 4.1 assuming Lemma 4.2
By Lemma 3.5, $u_N$ vanishes in the region where $|x| \leq N-1$ and $|y| \leq 1/8$ . Denote by ${\mathcal O}_0$ the interior of $\{ u_N = 0 \}$ , and let $\mathcal O$ be the connected component of ${\mathcal O}_0$ containing $B(0, 1/8)$ .
Let us first check that $\partial \mathcal O$ contains one-phase points of both types. Consider the line segment $\ell $ from the origin to $z_+ = (0,1/44)$ ; we know that $u(z_+) \geq 1/8$ (by equation (3.2)), so $\ell $ meets $\partial \mathcal O$ . At the first point of intersection $z=(0,y)$ (going up from $0$ ), Lemma 3.5 says that $y> \alpha /8$ , and then Lemma 3.5 says that $u(w) \geq 0$ for w near z; hence, z is a (positive) one-phase point. Similarly, the first point of $\partial \mathcal O$ on the interval from $0$ to $z_- = (0,-1/44)$ is a negative one-phase point.
Next, we want to show that ${\mathcal O} \cap R_{ext} = \emptyset $ . Suppose not, and let $\gamma $ be a path in ${\mathcal O}$ that goes from $0$ to some point of $\mathcal O_0\cap R_{ext}$ . Certainly, $\gamma $ does not get close to the top and bottom boundaries, that is, where $|y| = 1-\frac {1}{88}$ , by equation (3.2). So there is a point $(x_0,y_0)\in \gamma $ such that $x_0> 2N+1$ and $|y_0| \leq 1-1/44$ . Then, as above with the origin, we can find a point $P = (x_0,y_0')$ above $(x_0,y_0)$ which lies in $\partial \mathcal O$ ; the open vertical segment between them lies inside of $\mathcal O$ (and is nonempty by the openness of $\mathcal O_0$ ). By Lemma 4.2, P is a two-phase point. But in fact the proof of Lemma 4.2 will say more: near P, the free boundary is a Lipschitz graph with a small constant, and then the nondegeneracy of u shows that on the vertical line through P, u is (strictly) positive on one side of P and negative on the other side; this contradicts the fact that the open segment between $(x_0,y_0)$ and P lies in $\mathcal O$ . Hence, ${\mathcal O} \cap R_{ext} = \emptyset $ . Note this, with Lemma 3.4, implies that
We still need to show that $\partial {\mathcal O}$ contains a branch point. Suppose not, and let $z\in \partial \mathcal O$ be given. Obviously, u takes nonzero values near z, so z lies in the free boundary, and by assumption z is a one-phase point (since only one-phase points or branch points can be on the boundary of an open subset of $\{u =0\}$ ). Suppose $z \in \Gamma ^+(u)$ . In the present situation (and even in ambient dimension $3$ ), the free boundary in a neighborhood of z is a smooth hypersurface $\Gamma $ , with $u> 0$ on one side of $\Gamma $ , and $u=0$ on the other side. Thus, $B_r(z) \cap \partial \mathcal O \subset \Gamma ^+(u)$ for some $r> 0$ small enough (depending on z).
More globally, the curve $\Gamma \subset \partial \mathcal O$ that contains z is a Jordan curve (it is disjoint from $R_{ext}$ , does not touch the boundary of $R_N$ and is locally smooth), so $\mathcal O$ , which is connected, is contained in one of the two components of $U = {\mathbb {R}}^2 \setminus \Gamma $ . If $\mathcal O$ is contained in the unbounded component of U, then we can replace u with $0$ on the bounded component of U and keep u a valid competitor (because $\Gamma \subset \partial \mathcal O \subset R_{in} \subset \subset R_N$ ). However, this strictly decreases the energy, which is a contradiction. Thus, $\mathcal O$ is contained in the bounded component of U. Arguing as before, if $\mathcal O$ is not the entirety of this bounded component, then we could replace u by $0$ on the rest of this bounded component and decrease the energy. So it must be that $\mathcal O$ is one of the connected components of $\mathbb R^2 \setminus \Gamma $ .
This contradicts the fact that $\partial {\mathcal O}$ meets $\Gamma ^-$ too. So $\partial {\mathcal O}$ contains a branch point, and the theorem follows from the proof of the lemma.
In ambient dimension $3$ , the same proof would work, using the fact that a connected smooth orientable hypersurface in ${\mathbb {R}}^n$ always separates ${\mathbb {R}}^n$ into exactly two connected components, as in the Jordan curve theorem.
The rest of this section will be devoted to the proof of Lemma 4.2. We begin by observing that when $|x|> 2N$ , $\partial _y u_N(x,\cdot)$ must be close to $\partial _y v_N(x,\cdot) \equiv 2$ at most points.
Lemma 4.3. For every $\varepsilon> 0$ , there exists an $N_0 = N_0(\varepsilon )> 0$ such that, if $N \geq N_0$ , then
Proof. Recall that $f_N(x) = 2$ for $2N \leq |x| \leq 3N$ , so $v_N(x,y) = 2 y$ and $\partial _y v_N(x,\cdot) \equiv 2$ . Also, $y\mapsto v_N(x,y)$ is harmonic on $[-1,1]$ and (when $x \notin X_0$ ) $y \mapsto u_N(x,y)$ is a $W^{1,2}$ function with the same trace as $v_N$ . Fix such an x, and set $I(w) = \int _{-1}^1 w_y^2\, dy$ for $w\in W^{1,2}([-1,1])$ . Amongst $w\in W^{1,2}([-1,1])$ with the same boundary values $\pm 2$ as $v_N(x,\cdot)$ , $v_N(x,\cdot)$ minimizes I. Thus, the Euler–Lagrange equation shows that $\int _{-1}^1 \partial _y v_N\, (\partial _y u_N-\partial _y v_N)\, dy=0$ . Hence, by Pythagoras,
$$ \begin{align*}I(u_N(x,\cdot)) = I(v_N(x,\cdot)) + \int_{-1}^1 |\partial_y u_N - \partial_y v_N|^2\, dy.\end{align*} $$
Now, we also care about the functional $H_x$ , so we need to add $|\{y \mid w(y) \neq 0\}|$ to $I(w)$ . For $v_N(x,\cdot)$ , we get $2$ , since $v_N(x,y)=2y$ only vanishes at $0$ . For $u_N(x,\cdot)$ , Corollary 3.7 says that, for any given $\theta> 0$ , and if N is large enough, $u_N(x,\cdot)$ can only vanish for $|y| < \theta $ . Then $2 - |\{y \mid u_N(x,y) \neq 0\}| \leq 2\theta $ and
We now integrate over x such that $2N < |x| < 3N-\frac {1}{4}$ , use equation (2.9) and get the desired result for $N\ge N_0(\varepsilon )$ .
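The 1D energy comparison in the proof of Lemma 4.3 can be sanity-checked numerically. The sketch below is our own construction (the competitor `w` is hypothetical, not from the paper): it discretizes $I(w)=\int_{-1}^1 w_y^2\,dy$ on a grid and verifies both the Euler–Lagrange orthogonality and the resulting Pythagoras identity for the linear minimizer $v(y)=2y$ .

```python
import math

# Discretize [-1, 1]; v(y) = 2y is the linear minimizer of
# I(w) = ∫ w_y^2 dy among w with boundary values w(±1) = ±2.
n = 2000
ys = [-1.0 + 2.0 * i / n for i in range(n + 1)]
h = 2.0 / n

v = [2.0 * y for y in ys]
# A hypothetical competitor with the same boundary values (sin(±π) = 0):
w = [2.0 * y + math.sin(math.pi * y) for y in ys]

def dirichlet(f):
    """Approximate I(f) = ∫_{-1}^1 f_y^2 dy by finite differences."""
    return sum(((f[i + 1] - f[i]) / h) ** 2 * h for i in range(n))

def cross(f, g):
    """Approximate ∫_{-1}^1 f_y g_y dy."""
    return sum(((f[i + 1] - f[i]) / h) * ((g[i + 1] - g[i]) / h) * h
               for i in range(n))

diff = [w[i] - v[i] for i in range(n + 1)]

# Euler-Lagrange orthogonality: since v_y ≡ 2 and (w - v)(±1) = 0,
# ∫ v_y (w_y - v_y) dy = 2[(w - v)(1) - (w - v)(-1)] = 0.
assert abs(cross(v, diff)) < 1e-8

# Pythagoras: I(w) = I(v) + I(w - v); in particular I(w) >= I(v).
assert abs(dirichlet(w) - dirichlet(v) - dirichlet(diff)) < 1e-8
```

The assertions reflect exactly the two steps of the proof: the cross term vanishes because $\partial_y v$ is constant and the competitor has the same trace, and the excess energy is then the squared $L^2$ distance of the gradients.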
We now need to invoke the results of [Reference De Philippis, Spolaor and Velichkov20] to show that the free boundary is smooth and that the positive and negative parts of $u_N$ extend smoothly to the free boundary. Recall that $u_N^+ = \max \{u_N, 0\}$ and $u_N^- = \max \{-u_N, 0\}$ .
Lemma 4.4. Let $\delta> 0$ . There exists $N_0 = N_0(\delta )> 1$ such that if $N \geq N_0$ , then each $\partial \{u^{\pm }_N>0\} \cap (\{ 2N< |x| < 3N-1\}\times [-1,1])$ is a $C^{1,1/4}$ graph over the set $\{y = 0\}$ with norm $\leq \delta $ . Furthermore,
with a $C^{0,1/4}$ seminorm less than $1$ .
Proof. Pick some $r_0> 0$ small, but independent of $N, \delta $ . For points $x_0$ such that $B(x_0, r_0) \cap \Gamma _{\mathrm {TP}}(u_N) = \emptyset $ , the result follows from uniform Reifenberg flatness (Corollary 3.7) and standard ‘flat-implies-smooth’ regularity for the one-phase problem (c.f. [Reference Alt and Caffarelli1]).
If $B(x_0, r_0) \cap \Gamma _{\mathrm {TP}}(u_N) \neq \emptyset $ , then we can consider $y\in B(x_0, r_0) \cap \Gamma _{\mathrm {TP}}(u_N)$ , and look at $B(y, 2r_0)$ . Then the regularity follows from the $\theta $ -Reifenberg flatness at scale 1 (Corollary 3.7), the uniform Lipschitz continuity (Theorem 2.8) and [Reference De Philippis, Spolaor and Velichkov20, Theorem 3.1].
In both instances, the dependence on $r_0, \theta $ is such that the $C^{1,1/4}$ norm of the graph(s) goes to zero as $r_0> 0$ stays constant but the Reifenberg flatness parameter, $\theta $ , goes to zero. Corollary 3.7 says we can take $\theta $ arbitrarily small, at the expense of making N larger. The regularity of the gradient is a consequence of standard elliptic regularity once we know the regularity of the free boundary.
We are finally ready to prove Lemma 4.2. Our proof uses harmonic analysis to take advantage of the fact that $u(x,y)$ is close to $2y$ in an integral sense (c.f. Lemmas 4.3 and 4.4). One might try a barrier argument instead, but it was not clear (to us) how to gain the necessary pointwise control on the boundary of a subdomain to use the maximum principle.
Proof. Let $\varepsilon> 0$ be small, to be chosen later, and assume by contradiction that there exists a one-phase point $(x_0, y_0) \in \partial \{u_N> 0\}\cap \{2N\leq |x| \leq 3N-1\}$ . By Lemma 4.3, we may choose N large enough (depending only on $\varepsilon $ ) such that for any square, Q, of side length $\ell (Q) \leq 3/4$ centered at the point $(x_0,y_0)$ we have
Using Fubini, we also get that
where the integration is occurring over squares all centered at $(x_0, y_0)$ with increasing side lengths. Thus, we can pick a square $Q_0$ (which will be fixed going forward) with $1/2 \leq \ell (Q_0) \leq 3/4$ such that
Note that $u_N|_{Q_0}$ is $L$ -Lipschitz (by Theorem 2.8), where L is independent of $N, Q_0$ .
Let $\tilde {Q} = Q_0\cap \{u_N> 0\}$ . By Lemma 4.4, the domain $\tilde {Q}$ is piecewise $C^{1,1/4}$ and is an NTA domain, with constants uniform in N (c.f. [Reference Jerison and Kenig25] for definitions and details). Let us say a bit more about this; the NTA constants depend on how the vertical edges of $Q_0$ touch the smooth graph $\partial \{u_N> 0\}$ . However, this graph over $\{y = 0\}$ has very small norm (uniform in N), so this intersection happens (quantitatively) transversely, and thus the NTA constants are also uniform in N. These bounds on the norm of the graph which gives $\partial \{u_N> 0\}$ also imply that $4 \geq \mathcal H^1(\partial \tilde {Q}) \geq 1$ and $|\tilde {Q}| \geq \frac {1}{8}$ . From this information, using equation (4.2) and Chebyshev, there exists an $A \in \tilde {Q}$ with $\mathrm {dist}(A, \partial \tilde {Q})> \frac {1}{100}$ and $|\partial _y u(A) - 2|^2 < 8\varepsilon $ (this will work as long as $\varepsilon> 0$ is small enough).
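The Chebyshev step here is the usual weak-type bound, which we state generically in our own notation; applied with $g = \partial _y u - 2$ and $\lambda = 8\varepsilon $ , it makes the exceptional set small compared with $|\tilde Q| \geq \frac 18$ , leaving room for a point A as above:

```latex
% Chebyshev's inequality for g \in L^2(\tilde Q) and \lambda > 0:
\left| \{ x \in \tilde{Q} : |g(x)|^2 \geq \lambda \} \right|
\;\leq\; \frac{1}{\lambda} \int_{\tilde{Q}} |g|^2 \, dx .
```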
Let $\omega _N^A$ be the harmonic measure of $\tilde {Q}$ with a pole at A (where the notation emphasizes the dependence on N). Since $\partial _y u$ is a harmonic function in $\tilde {Q}$ , we have the following integral representation:
Finally, it will be convenient to let $\partial \tilde {Q} = \Gamma _1 \cup \Gamma _2$ , where $\Gamma _1 = \partial \{u_N> 0\} \cap \overline {Q}_0$ and $\Gamma _2 = \partial Q_0 \cap \overline {\{u_N> 0\}}$ ; see the picture below.
Recall that $(x_0, y_0)$ is a one-phase point for $u_N$ . Thus, we can compute $\partial _y u_N(x_0, y_0) \leq \partial _\nu u_N(x_0, y_0) = 1$ , where the last equality holds due to the free boundary condition at (regular) one-phase points. Because the derivative of $u_N$ restricted to $\partial \{u_N> 0\}$ has $C^{0, 1/4}$ seminorm less than 1, we also have that
Putting things together, we have that
where in the last line we used that $\omega _N^A(\Gamma _2) = 1-\omega _N^A(\Gamma _1)$ and also Bourgain’s Lemma (c.f. [Reference Jerison and Kenig25, Lemma 4.2]), which implies that there is a constant $\tilde {c}> 0$ (independent of $\varepsilon , N$ ) such that $\frac {\omega _N^A(\Gamma _1)}{1-\omega _N^A(\Gamma _1)} \geq \tilde {c}$ and $\tilde {c}^{-1} \geq \frac {1}{1-\omega _N^A(\Gamma _1)}$ .
Recall that u is $L$ -Lipschitz, and write $\Gamma _2 = \Gamma _{2,+} \cup \Gamma _{2,-}$ , with
After overestimating over each set, equation (4.3) gives us
Once more invoking Bourgain’s theorem, we have that
is bounded strictly away from zero, independently of N (large enough) or $\varepsilon> 0$ .
Using the condition on the integral of $|\partial _y u_N - 2|$ on $\partial Q_0$ (i.e., equation (4.2)), we see that
Recall that $\mathcal H^1(\partial \tilde {Q}) \geq 1$ , and we get that
We now recall that in $\tilde {Q}$ we have that the harmonic measure $\omega _N^A \in A_\infty (\mathcal H^1)$ (see, e.g., [Reference David and Jerison9, Theorem 2]). In fact, the $A_\infty $ constants depend on the NTA constants of $\tilde {Q}$ , the $1$ Ahlfors regularity of $\partial \tilde {Q}$ , the distance from A to $\partial \tilde {Q}$ and the diameter of $\tilde {Q}$ (for more details and definitions of the relevant terms, see [Reference David and Jerison9]). As discussed above, all of these quantities can be taken uniform for N large enough. Thus, we can take $\varepsilon> 0$ small until equation (4.5) contradicts $\omega _N^A \in A_\infty (\mathcal H^1)$ , and we are done.
Remark 4.5. The arguments above can be adapted to produce cusp points in $\mathbb R^{2+1}$ but not directly in ambient dimensions larger than $3$ . In the setting of ${\mathbb {R}}^3$ , our domain is given by $R_N := B'(0, 3N)\times [1,1] \subset \mathbb R^{2+1}$ , where $B'$ is a ball in $\mathbb R^{2}$ . Then $f_N(r, \theta ) = f_N(r): B'(0, 3N)\rightarrow \mathbb R$ depends only on the radial variable. Now, we define $f_N$ piecewise:
We can then define the slice minimizer similarly, where $v_N(r, \theta , \cdot) \in W^{1,2}([-1,1])$ minimizes with the boundary values $v_N(r, \theta , \pm 1) = \pm f_N(r)$ . In particular, when $f_N(r) \geq 1$ we have $v_N(r, \theta , y) = yf_N(r)$ , and when $f_N(r) \leq 1$ we have $v_N(r, \theta , y)= \mathrm {sgn}(y)\left (|y|-1 + f_N(r)\right )_+$ . In either setting, we have $|\partial _r v_N| \leq |\partial _r f_N|$ .
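A minimal numerical sketch of the slice minimizer just described; the piecewise formula with $|y|$ is our reading of the text (the extraction dropped the absolute-value bars), and the function name is ours, not the paper’s.

```python
def slice_minimizer(f, y):
    """1D slice minimizer v_N(r, theta, y) with boundary values ±f at y = ±1.

    For f >= 1 the slice is linear, v = y * f; for f <= 1 it has a 'dead zone'
    around y = 0 where it vanishes: v = sgn(y) * (|y| - 1 + f)_+ .
    """
    if f >= 1.0:
        return y * f
    sgn = 1.0 if y > 0 else (-1.0 if y < 0 else 0.0)
    return sgn * max(abs(y) - 1.0 + f, 0.0)

# Boundary values v(±1) = ±f hold in both regimes:
for f in (0.3, 0.9, 1.0, 2.0):
    assert abs(slice_minimizer(f, 1.0) - f) < 1e-12
    assert abs(slice_minimizer(f, -1.0) + f) < 1e-12

# For f < 1 the slice vanishes on the dead zone |y| <= 1 - f:
assert slice_minimizer(0.3, 0.5) == 0.0   # |y| = 0.5 <= 1 - 0.3
assert slice_minimizer(0.3, 0.9) > 0.0    # outside the dead zone
```

The dead zone for $f_N(r) < 1$ is exactly what produces the zero set separating the two phases in the radial construction.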
Computing just like in Lemma 2.2, we get that
From here, we argue identically as above, noting that we never use the precise bound on $\iint _{\overline {R}_N} \left |\frac {\partial v_N}{\partial r}\right |^2$ , just that it goes to zero as $N \rightarrow \infty $ , and every other quantity stays bounded.
For example, the contradiction in the proof of Lemma 3.5 now comes when $\eta \leq \frac {C}{\eta \log (2N)}$ , which is not true for $N> 1$ large enough.
5 Accumulating cusps for almost minimizers
In this section, we prove that the cusp set for almost-minimizers to equation (1.1) can be essentially arbitrary. To state our results in maximum generality, we introduce the variable coefficient version of equation (1.1):
Throughout this section, we will assume that $q_{\pm } \in C^{0,\alpha }(\overline {\Omega })$ for some $\alpha \in (0,1)$ and that the weights satisfy the nondegeneracy condition $q_{\pm } \geq c_0> 0$ in all of the domain. Clearly, we recover the original functional (1.1) by letting $q_{\pm } \equiv \lambda _{\pm }$ in equation (5.1).
We now state our main result.
Theorem 5.1. Let $f^- \leq f^+ \in C^2(\mathbb R^n)$ be such that $f^+ = f^- = 0$ outside of $B(0, R/10)$ for some large $R> 0$ . Let $\Gamma ^\pm $ be the graphs of $f^{\pm }$ . For any $q_{\pm } \in C^{0,\alpha }$ , with $c_0^{-1} \geq q_{\pm } \geq c_0> 0$ , there exists an almost-minimizer u to the energy $J_{B(0, R)}$ such that $\Gamma ^\pm = \partial \{\pm u> 0\}$ .
Notice that for u as in Theorem 5.1 we have $\Gamma _{\mathrm {BP}}(u) = \partial _{\mathbb R^{n-1}}\{f^-(x) = f^+(x)\}$ . Recall that any closed set can be the zero set of a $C^2$ function (take a smoothing of the distance function to the given set, c.f. [Reference Stein29, VI, Theorem 2]). As such, we have the following corollary (compare to Theorem 1.7 from the introduction).
Corollary 5.2. Let $E \subset \mathbb R^{n-1}$ be any compact set with no interior point in $\mathbb R^{n-1}$ , and let $q_{\pm } \in C^{0,\alpha }(\mathbb R^n)$ be nondegenerate and bounded. Then, for some $R> 0$ large enough depending on E, there exists an almost-minimizer u to $J_{B(0, R)}$ with weights $q_{\pm }$ such that $\Gamma _{\mathrm {BP}}(u) = E$ .
Furthermore, we can take $\Gamma ^+ := \partial \{ u> 0\}$ to be the reflection of $\Gamma ^- := \partial \{ u < 0\}$ around $\{x_n = 0\} \subset \mathbb R^n$ .
Theorem 5.1 will follow from two lemmas: the first, a general result about which functions are almost-minimizers to the two-phase functional; the second, a construction of such functions.
Lemma 5.3. Let $n\geq 2$ , $\alpha \in (0,1)$ , and $f^- \leq f^+ \in C^2(\mathbb R^n)$ be such that $f^- = f^+ =0$ outside of $B(0, R/10)$ for some $R> 1$ , and let $\Gamma ^{\pm }$ be the graph of $f^{\pm }$ . Let $\Omega ^{+}$ (resp. $\Omega ^-$ ) be the part of $B(0, 4R)$ that lies above (resp. below) the graph of $f^{+}$ (resp. $f^-$ ). Let $u^{\pm } \in C^{1,\alpha }(\overline {\Omega }^{\pm })$ be such that there exists a constant $C_1> 0$ such that for $x \in \Omega ^{\pm } \cap B(0,2R)$ we have that
Then $u = u^+ - u^-$ is an almost-minimizer to equation (5.1) inside of $B(0, R)$ , where $q_{\pm }$ are the $C^{0,\alpha }$ functions which agree with $|\nabla u^{\pm }|$ on $\Gamma ^{\pm }$ .
More precisely, there exists a constant $C = C(C_1, \|f^\pm \|_{C^2}, \|u^\pm \|_{C^{1,\alpha }(\overline {\Omega ^{\pm }})})> 0$ and $1> r_0 = r_0(C_1, \|f^\pm \|_{C^2}, \|u^\pm \|_{C^{1,\alpha }(\overline {\Omega ^{\pm }})}) > 0$ such that, for any ball B satisfying $\overline {B} \subset B(0, R)$ and $r(B) \leq r_0$ , we have
for any v with $v = u$ on $B(0, R)\backslash B$ .
Key to the proof of Lemma 5.3 is the following result which is adapted from [Reference DeSilva and Jerison24].
Lemma 5.4. Let v be a critical point of $J_B$ (associated to $q_{\pm }$ ). Assume there exists a family of functions $\phi _t: \overline {B} \rightarrow {\mathbb {R}}$ , parameterized by $t\in [a,b]$ and continuous in both variables, that satisfy the following properties:

1. $\Delta \phi _t = 0$ in $\{\phi _t \neq 0\} \cap B$ .

2. $\{\phi _t = 0\} = \partial \{\phi _t> 0\} = \partial \{\phi _t < 0\}$ . Furthermore, $t\mapsto \{\phi _t = 0\}$ is continuous in the Hausdorff distance sense.

3. At every point on $\partial \{\pm \phi _t> 0\}$ , there exists a ball inside $\{\pm \phi _t> 0\}$ which touches the free boundary at that point.

4. At every $x_1 \in \partial \{\pm \phi _t> 0\}$ , we satisfy
$$ \begin{align*}(\partial_{\nu^+} \phi_t)^2(x_1) - (\partial_{\nu^-} \phi_t)^2(x_1) \geq q^2_+(x_1) - q^2_-(x_1)\end{align*} $$(respectively $\leq q^2_+ - q^2_-$ )

5. $\phi _b \leq v$ in $\overline {B}$ (respectively $\phi _b\geq v$ ).

6. For all $\rho \in [a,b]$ , $\phi _\rho \leq v$ on $\partial B$ and $\phi _\rho < v$ on $\partial B \cap \overline {\{v \neq 0\}}$ (respectively with the inequalities reversed),
then we have that $\phi _a \leq v$ in $\overline {B}$ (respectively $\phi _a \geq v$ in $\overline {B}$ ).
Proof of Lemma 5.3, assuming Lemma 5.4
We first check the almost-minimization condition (5.3) for $x_0 \in \Gamma ^+ \cap \Gamma ^-$ :
Case 1. Let $x_0 \in \Gamma ^+\cap \Gamma ^-$ and $r> 0$ small enough, and let $v =u$ on $B(0, R)\backslash B(x_0,r)$ .
Since $u^{\pm } \in C^{1,\alpha }(\overline {\Omega ^{\pm }})$ and $\Gamma ^+\cap \Gamma ^$ is closed and smooth, there exists an $r_0$ such that if $r < r_0$ , then, for $x\in B(x_0, r)$
Recall that $q_\pm (x_0) = |\nabla u^\pm (x_0)|$ by definition. To give more detail, equation (5.4) follows from the Taylor series expansion of $u^{\pm }$ at the point $x_0$ , where we have used the fact that $\Gamma ^\pm $ share a unit normal, e, at $x_0$ , which we take to point into the set $\{u> 0\}$ .
Recall the functional proved to be monotone by Weiss [Reference Weiss31]:
It follows from equation (5.4) and the $C^{0,\alpha }$ character of $\nabla u^{\pm }$ that
where $C> 0$ depends on the $C^{0,\alpha }$ norm of $\nabla u^\pm $ restricted to $\overline {\Omega }^{\pm }$ .
We now want to show that for any minimizer v to $J_{B(x_0, r)}$ with $v = u$ in $\mathbb R^n \backslash B(x_0, r)$ we have
Assume that equation (5.6) holds. We would like to compare $W(v, x_0, r)$ to $W(u, x_0, r)$ , but naïvely underestimating $W(v, x_0, r)$ by $W(v, x_0, 0)$ is problematic, as $x_0$ may not be in the free boundary of v. To combat this, let $x^+$ be the closest point to $x_0$ in $\partial \{v> 0\}$ and $x^-$ the closest point to $x_0$ in $\partial \{v < 0\}$ . We note that equation (5.6) implies that $|x_0 - x^{\pm }| < r^{1+\alpha /2}$ . Let $\rho = r/2 - \max \{ |x^+-x_0|, |x^- -x_0|\}$ . Note that $B(x^{\pm }, 2\rho ) \subset B(x_0, r)$ and $r> 2\rho > r(1-r^{\alpha /2})$ . Also, $B(x_0, r/2) \supset B(x^{\pm }, \rho )$ . Hence,
because $|2\rho /r - 1| < r^{\alpha /2}$ and v is Lipschitz in $B(x_0, r/2)$ with a constant controlled by $\frac {1}{r^n}\int _{B(x_0, r)} |\nabla v|^2 \leq \frac {1}{r^n} J_{B(x_0, r)}(u)$ , the latter of which is bounded by the Lipschitz norm of u and the supremums of $q_{\pm }$ .
To estimate each term in the summand in equation (5.7), we think of $v^+, v^-$ as being separate critical points of the one-phase problems associated to the weights $\tilde {q}_{\pm }|_{\partial \{v^{\pm }> 0\}} := \partial _{\nu ^{\pm }} v^\pm |_{\partial \{v^{\pm } > 0\}}$ . By the regularity theory of the two-phase problem in [Reference De Philippis, Spolaor and Velichkov20] and equation (5.6), we know that $\partial \{\pm v> 0\}$ are $C^{1,\alpha }$ in $B(x_0, r/2)$ . Thus, $\tilde {q}_{\pm } \in C^{0,\alpha }(\partial \{v^{\pm }> 0\})$ and can be extended Hölder continuously to all of $B(x_0, r/2)$ (with norm uniform in the constants we care about).
The free boundary condition satisfied by v, as a minimizer of the two-phase problem, tells us that $\tilde {q}_{\pm }:= \partial _{\nu ^{\pm }} v^{\pm } = q_{\pm }$ at one-phase points, but at two-phase points the free boundary condition only implies that $\tilde {q}_{\pm } \geq q_{\pm }$ . These observations tell us that $\|\tilde {q}_{\pm } - q_{\pm }\|_{L^\infty (B(x_0, r/2))} \leq Cr^{\alpha }$ .
By monotonicity, we can underestimate each
Putting this back into equation (5.7), using monotonicity and equation (5.5) we get that
Multiplying through by $r^n$ gives the almostminimization inequality.
So to finish Case 1, it suffices to prove equation (5.6). Here is where we apply Lemma 5.4. We do this on the interval $[a,b] = [r^{1+\alpha /2}, 3r]$ (recall that we can take r small so that $3r < 1$ ). To simplify notation, let us assume that $x_0 = 0$ and $e = e_n$ . We then create the family of sub/supersolutions to the twophase problem in $B(x_0,r)$ defined by
where $m_\pm = \min _{B(x_0, r)} q_\pm $ and $M_\pm = \max _{B(x_0, r)} q_{\pm }$ . Let v be a minimizer to the twophase problem in $B(x_0, r)$ with boundary values u.
We first verify condition (5) in Lemma 5.4. We note that for any $x\in B(x_0, r)$