
Adjoint-based interfacial control of viscous drops

Published online by Cambridge University Press:  29 January 2021

Alexandru Fikl*
Affiliation:
Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
Daniel J. Bodony
Affiliation:
Department of Aerospace Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
*Email address for correspondence: fikl2@illinois.edu

Abstract

We develop a continuous adjoint formulation for controlling the deformation of clean, neutrally buoyant droplets in Stokes flow. The focus is on surface tension-driven flows where the interface is deformed by the local fluid velocity. We apply results from shape optimization to rigorously derive the optimality conditions for a range of interfacial problems. In the cases of interest, we make use of boundary integral methods as a natural choice for the numerical discretization of the flow variables. In the static case, our methodology is tested on a tracking-type cost functional and corresponds to classic shape optimization problems. We show agreement with black-box finite difference-based gradients and accurate minimization of the cost functionals. Finally, we demonstrate the methodology on the control of unsteady droplet deformation through external forcing.

Type: JFM Papers
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright: © The Author(s), 2021. Published by Cambridge University Press

1. Introduction

In this article we develop and apply an adjoint-based approach for controlling incompressible two-phase flows with sharp interfaces governed by the Stokes equations. The main model we consider is that of

(1.1)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm \\ \boldsymbol{\nabla} \boldsymbol{\cdot} \sigma_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm \end{array}\right\}, \end{equation}

where a discontinuity in traction forms at the interface due to a (possibly non-constant) surface tension.

While a significant simplification of many practical flows, this model shares important features prevalent in more complex multiphase flows, such as discontinuous flow quantities (here pressure and velocity gradients) and the presence of higher-order surface properties such as curvature. Understanding these features and their effect on optimization problems permits optimization of more realistic fluids at higher Reynolds numbers. Common extensions to this model are important to many engineering and industrial applications, such as the interactions of water and oil in reservoirs (see Pau et al. 2009), air and fuel in combustion engines (see Desjardins, Moureau & Pitsch 2008), water and air in sprinklers (see Hua et al. 2002), etc. Developing a robust and flexible control and optimization framework will enable improved efficiency in these systems.

The optimal control of single-phase fluid problems has been extensively studied, starting with the pioneering work of Pironneau (1974) and Jameson (1988) in the context of aerodynamic design. However, there are only a few contributions concerning two-phase flows. Extending single-phase results to two-phase flows has generally been difficult due to the inclusion of the geometry as a problem unknown and the ensuing discontinuities in flow quantities. In the sharp interface limit, the fluid–fluid interface acts similarly to a domain boundary and, thus, the optimal control of two-phase flows shares many features with shape optimization. Therefore, the shape calculus developed for shape optimization in Delfour & Zolésio (2001) and, later, extended to time-dependent domains in Moubachir & Zolésio (2006) plays a central role. To efficiently perform optimal control of partial-differential-equation-constrained systems, the preferred approach is generally the adjoint method based on Lagrange multiplier theory. A major downside of the method is that it is purely formal, in that it assumes the required differentiability of the problem variables, so care must be taken when applying it directly to problems with discontinuous solutions. In the case of incompressible two-phase flows with a sharp interface, a commonly used formulation is the so-called one-fluid model (see Prosperetti & Tryggvason 2009). In this model, the velocity and pressure are considered as variables over the whole domain, e.g.

(1.2)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{u} \triangleq \alpha \boldsymbol{u}_+ + (1 - \alpha) \boldsymbol{u}_-, \\ p\triangleq \alpha p_+ + (1 - \alpha) p_-, \end{array}\right\} \end{equation}

where $\alpha$ is an indicator function. However, recent work on similar systems with discontinuous coefficients, such as Pantz (2005) for a heat equation and Allaire, Jouve & van Goethem (2011) for linear elasticity, has shown that the bulk variables are not differentiable with respect to the domain, i.e. not shape differentiable. The present manuscript considers the restrictions $(\boldsymbol{u}_\pm, p_\pm)$ separately and develops an adjoint method for the coupled system, instead of a single-equation system like the one-fluid model.

The novelty of our contribution lies in extending and verifying the methodology proposed by Pantz (2005) for models similar to the above surface tension-driven two-phase flow. In particular, we derive the adjoint equations and adjoint-based gradient for static and quasi-static two-phase Stokes systems, which have been successfully used in many studies of droplet stability and breakup (see Stone & Leal 1989; Pozrikidis 1990; Hou, Lowengrub & Shelley 1994; Stone 1994; Lac & Homsy 2007; Ojala & Tornberg 2015) and provide a practical set-up for advancing the control of multiphase flows. Furthermore, the use of shape calculus allows for deriving an adjoint-based gradient independent of the representation of the interface. From the continuous adjoint perspective, the smoothness of the resulting flow and control parameters is also very important. For example, smooth solutions to the Stokes problem require a smooth curvature, but this is not a priori guaranteed by the optimization procedure. For the forward problem, several time-varying interfacial models have recently been studied in Pruss & Simonett (2016) and shown to be well-posed, paving the way to similar results for the adjoint problem. The smoothness requirement on the interface and curvature also poses problems numerically. For example, the widely used volume of fluid (VOF) method requires substantial effort to provide an accurate representation of the curvature (see Popinet 2009). Our numerical discretization relies on boundary element methods (see Pozrikidis 1992) to provide accurate representations of all quantities of interest. In particular, since the interface is expressed explicitly, we can compute accurate geometric quantities. The use of boundary element methods in shape optimization problems also has a long history, as found in classic monographs such as Aliabadi (2002) and in recent applications by Alouges, Desimone & Heltai (2011), Bonnet, Liu & Veerapaneni (2020) and others.

The current work can also provide a starting point for the adjoint-based optimization of higher Reynolds number multiphase flows governed by the incompressible Navier–Stokes equations. However, the flow nonlinearity and interface breakup will require changes to the formulation. Nonetheless, the present work can be viewed as a verification formulation as well as a predictive approach for droplets in the Stokes regime. Note that optimal control of multiphase Navier–Stokes systems has been approached in the past with various models. For example, in Feppon et al. (2019) the authors derive a coupled thermal fluid–structure model, and in Deng, Liu & Wu (2017) and Garcke et al. (2018) a phase field model was used to describe the interface. Phase-field models have seen the most development in this area, due in part to the fact that solutions are quickly varying but continuous and allow for a standard approach through adjoint-based methods. In contrast, the current method requires the more complex shape calculus to handle the sharp discontinuities explicitly, but the resulting gradients are not quickly varying and the optimization is shown to be well behaved. Other multiphase applications include the optimal control of a two-phase Stefan problem with level set methods in Bernauer & Herzog (2011), free-surface Stokes problems in Repke, Marheineke & Pinnau (2011), Palacios, Alonso & Jameson (2012) and Sellier (2016), etc. To the authors' knowledge, no previous work has incorporated sharp interfaces and surface tension in a fluid optimal control problem.

The paper is organized as follows. We begin by introducing the required notation, fluid model and notions of boundary perturbations in § 2. In § 3, we define the optimization problems of interest and derive the adjoint equations and the gradients required for the optimization using gradient descent-type methods. The discretization details are presented in § 4. The discretization and numerical results are focused on a single droplet under axisymmetric assumptions. The presented results are then numerically verified in § 5 for several test problems. Finally, concluding remarks are given in § 6.

2. Preliminaries

2.1. Two-phase Stokes system

We are interested in the quasi-static evolution of multiple clean droplets in an incompressible flow. The droplets experience no phase change, but are driven by a constant surface tension at the fluid–fluid interface. Such a system is governed by the two-phase Stokes equations, which have been described in detail in classic monographs (see Ladyzhenskaya 1963; Pozrikidis 1992; Temam 2001). The notation and main auxiliary results used below are presented in appendix A.

We consider a three-dimensional domain $\varOmega _- \subset \mathbb {R}^d, d = 3$, and its complement $\varOmega _+ = \mathbb {R}^d \setminus \bar {\varOmega }_-$ to denote the union of the droplets’ interiors and the ambient fluid, respectively. The boundary between the two domains is $\varSigma \triangleq \bar {\varOmega }_- \cap \bar {\varOmega }_+$. In the following, we assume $\varSigma$ is a finite set of disjoint, closed, bounded and orientable surfaces. Furthermore, we require that $\varSigma$ is of class $\mathcal {C}^2$ to allow for the definition of a unique curvature at every point.

The droplets and the ambient fluid have dynamic viscosities $\mu _-$ and $\mu _+$, respectively. The flow in each phase is described by the velocity fields $\boldsymbol {u}_\pm : \varOmega _\pm \to \mathbb {R}^d$ and the pressures $p_\pm : \varOmega _\pm \to \mathbb {R}$. We also define the Cauchy stress tensor and the rate-of-strain tensor by

(2.1)\begin{equation} \left.\begin{array}{c@{}} \sigma_\pm \triangleq -p_\pm {\boldsymbol{\mathsf{I}}} + 2 \mu_\pm \varepsilon_\pm, \\ \varepsilon_\pm \triangleq \tfrac{1}{2} (\boldsymbol{\nabla} \boldsymbol{u}_\pm + (\boldsymbol{\nabla} \boldsymbol{u}_\pm)^T). \end{array}\right\} \end{equation}

Following these definitions, the static incompressible two-phase Stokes equations are given by

(2.2)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm, \\ \boldsymbol{\nabla} \boldsymbol{\cdot} \sigma_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm, \\ \boldsymbol{u}_+ = \boldsymbol{u}^\infty, \quad \|\boldsymbol{x}\| \to \infty. \end{array}\right\} \end{equation}

The boundary conditions at the interface between the fluids are given as a set of jump conditions on $\varSigma$. Namely, the velocity must satisfy a no-slip condition and the surface stresses are related to the additive curvature of the interface by the Young–Laplace law. We write

(2.3)\begin{equation} \left.\begin{array}{c@{}} {[\![ \boldsymbol{u} ]\!]} = 0, \\ {[\![\, \boldsymbol{f} ]\!]} = \gamma \kappa \boldsymbol{n},\end{array}\right\} \end{equation}

where $\boldsymbol {f}_\pm \triangleq \boldsymbol {n} \boldsymbol {\cdot } \sigma _\pm$ is the surface traction, $\gamma$ is the surface tension coefficient and $\boldsymbol {n}$ is the exterior normal vector to $\varOmega _-$. The jump operator is defined as $[\![\, f ]\!] \triangleq f_+(\boldsymbol {x}_+) - f_-(\boldsymbol {x}_-)$, where

(2.4)\begin{equation} f(\boldsymbol{x}_\pm) = \lim_{\epsilon \to 0^+} f(\boldsymbol{x} \pm \epsilon \boldsymbol{n}). \end{equation}

It is convenient to non-dimensionalize the equations to reduce the number of parameters in the problem. We choose the following non-dimensionalizations, common in the literature:

(2.5ac)\begin{equation} \boldsymbol{u}_\pm = U \hat{\boldsymbol{u}}_\pm, \quad p_\pm = \frac{\mu_\pm U}{R} \hat{p}_\pm, \quad \boldsymbol{x} = R \hat{\boldsymbol{x}}, \end{equation}

where $U$ is a characteristic velocity magnitude and $R$ is a characteristic droplet radius. The resulting non-dimensional parameters governing the system are the viscosity ratio $\lambda$ and the capillary number $\textit{Ca}$, defined as

(2.6a,b)\begin{equation} \lambda \triangleq \frac{\mu_-}{\mu_+} \quad \text{and} \quad \textit{Ca} \triangleq \frac{\mu_+ U}{\gamma}. \end{equation}
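As a concrete illustration of (2.6a,b), the snippet below evaluates $\lambda$ and $\textit{Ca}$ for a set of purely hypothetical fluid properties; the numerical values are placeholders for illustration only, not parameters used in this paper.

```python
# Illustrative evaluation of the non-dimensional groups in (2.6a,b).
# All property values below are hypothetical placeholders.
mu_minus = 1.0e-3   # droplet dynamic viscosity [Pa s]
mu_plus = 5.0e-2    # ambient dynamic viscosity [Pa s]
U = 1.0e-3          # characteristic velocity [m/s]
gamma = 2.0e-2      # surface tension coefficient [N/m]

lam = mu_minus / mu_plus    # viscosity ratio, lambda = mu_- / mu_+
Ca = mu_plus * U / gamma    # capillary number, Ca = mu_+ U / gamma

print(f"lambda = {lam:.3g}, Ca = {Ca:.3g}")
```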

In the remainder we use the non-dimensional equations and drop the hats for brevity. We write the non-dimensional stress jump condition (2.3) as follows:

(2.7)\begin{equation} [\![\, \boldsymbol{f} ]\!] _\lambda \triangleq \boldsymbol{f}_+ - \lambda \boldsymbol{f}_- = \frac{1}{\textit{Ca}} \kappa \boldsymbol{n}. \end{equation}

In order to evolve the shape of the droplets in time, we use a quasi-static approach in which we first compute the velocity using (2.2) and then displace the interface using an ordinary differential equation. The evolution of the interface is described by

(2.8)\begin{equation} \dfrac{\partial \boldsymbol{X}}{\partial \tau} = \boldsymbol{V}(\boldsymbol{u}, \boldsymbol{X}), \end{equation}

where $\boldsymbol {X}$ is a parameterization of the interface and $\boldsymbol {V}$ is a given source term. Typical examples of the source term are

(2.9)\begin{equation} \boldsymbol{V}(\boldsymbol{u}, \boldsymbol{X}) \triangleq (\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{n} + ({\boldsymbol{\mathsf{I}}} - \boldsymbol{n} \otimes \boldsymbol{n}) \boldsymbol{w}, \end{equation}

where $\boldsymbol{w}$ can be chosen to be $\boldsymbol{0}$, $\boldsymbol{u}$ or a different tangential correction to $\boldsymbol{u}$. The choice of $\boldsymbol{w}$ is left open, since only the normal component of the velocity field governs the deformation of the interface. The tangential component can be artificially imposed for numerical reasons, e.g. to cluster points in regions of high curvature; for example, see the works of Hou et al. (1994) and Ojala & Tornberg (2015).
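The quasi-static evolution of (2.8)–(2.9) with $\boldsymbol{w} = \boldsymbol{0}$ can be sketched as follows. This is a minimal two-dimensional illustration: the function `normal_velocity` is a placeholder for the boundary-integral Stokes solve of this section and simply prescribes a uniform normal speed so that the snippet is self-contained.

```python
import numpy as np

def unit_normals(X):
    """Outward unit normals of a closed curve sampled at points X, shape (n, 2)."""
    # Tangent via centred differences on the periodic point sequence.
    t = np.roll(X, -1, axis=0) - np.roll(X, 1, axis=0)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    # Rotate the tangent by -90 degrees to get the outward normal
    # (assumes counter-clockwise orientation of the curve).
    return np.stack([t[:, 1], -t[:, 0]], axis=1)

def normal_velocity(X):
    # Placeholder for u . n from the two-phase Stokes solve:
    # here a uniform outward speed, so the circle stays a circle.
    return np.ones(len(X))

def step(X, dt):
    # Forward Euler on (2.8) with V = (u . n) n, i.e. w = 0 in (2.9).
    n = unit_normals(X)
    un = normal_velocity(X)
    return X + dt * un[:, None] * n

theta = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit circle
for _ in range(10):
    X = step(X, dt=0.01)

# A circle expanding at unit normal speed keeps its shape:
# radius grows from 1 to 1 + 10 * 0.01 = 1.1.
print(np.linalg.norm(X, axis=1).mean())
```

In practice the time integrator and the point redistribution (the tangential correction $\boldsymbol{w}$) are where most of the numerical care goes; the sketch only shows the structure of the update.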

2.2. Shape derivatives

Interfacial control in multiphase flows with sharp interfaces is inherently a form of shape optimization. With this in mind, we will review some of the concepts behind shape optimization and perturbations with respect to the domain. We consider here Hadamard's method for boundary variations, which has been detailed rigorously in works such as Moubachir & Zolésio (2006), Allaire & Schoenauer (2007), Delfour & Zolésio (2001) and, from the view of differential geometry, in Walker (2015).

The method is based on defining so-called perturbations of the identity by

(2.10)\begin{equation} \varOmega_\epsilon = \boldsymbol{X}_\epsilon(\varOmega) = (\mathrm{Id} + \epsilon \tilde{\boldsymbol{X}})(\varOmega), \end{equation}

where $\mathrm{Id}$ is the identity mapping, $\tilde{\boldsymbol{X}} \in W^{2, \infty}(\mathbb{R}^d; \mathbb{R}^d)$ is a perturbation and $\epsilon > 0$ is a real constant. If $\epsilon$ is sufficiently small, then $\boldsymbol{X}_\epsilon$ is a diffeomorphism on $\mathbb{R}^d$ (Allaire & Schoenauer 2007, lemma 6.13). As such, by Hadamard's method we can identify every admissible shape $\varOmega_\epsilon$ with a perturbation $\boldsymbol{X}_\epsilon$. Therefore, the method allows, among other things, defining differentiation of shapes as differentiation in the Banach space $W^{2, \infty}(\mathbb{R}^d; \mathbb{R}^d)$, in terms of well-known concepts such as the Gâteaux and Fréchet derivatives.

In fact, using this simple definition, we can define the Fréchet derivatives of relevant functionals of the type

(2.11)\begin{equation} \left.\begin{array}{c@{}} \displaystyle\mathcal{J}_1(\varOmega) \triangleq \int_{\varOmega} f(\boldsymbol{x}; \varOmega)\, \textrm{d} V, \\ \displaystyle\mathcal{J}_2(\varSigma) \triangleq \int_{\varSigma} f(\boldsymbol{x}; \varOmega) \,{\rm d}S, \\ \displaystyle\mathcal{J}_3(\varSigma) \triangleq \int_{\varSigma} g(\boldsymbol{x}; \varSigma) \,{\rm d}S. \end{array}\right\} \end{equation}

We refer to Allaire & Schoenauer (2007) and Walker (2015) for proofs of these results and additional shape derivatives of quantities of interest, such as the normal and the additive curvature. The lemma below reproduces the main derivatives of functionals used in the current work.

Lemma 2.1 (Walker 2015, lemma 5.7)

Let $\tilde{\boldsymbol{X}} \in W^{2, \infty}(\mathbb{R}^d; \mathbb{R}^d)$ be a sufficiently small perturbation, $\varOmega \subset \mathbb{R}^d$ and $\varSigma \subset \mathbb{R}^d$ a $(d - 1)$-dimensional manifold without boundary. Then, the shape derivatives of the functionals (2.11) are

(2.12)\begin{equation} \left.\begin{array}{c@{}} \displaystyle D \mathcal{J}_1[\tilde{\boldsymbol{X}}] = \int_\varOmega f'[\tilde{\boldsymbol{X}}] \, \textrm{d} V + \int_{\partial \varOmega} f \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S, \\ \displaystyle D \mathcal{J}_2[\tilde{\boldsymbol{X}}] = \int_\varSigma f'[\tilde{\boldsymbol{X}}] \, {\rm d} S + \int_{\varSigma} (\boldsymbol{\nabla} f \boldsymbol{\cdot} \boldsymbol{n} + \kappa f) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S, \\ \displaystyle D \mathcal{J}_3[\tilde{\boldsymbol{X}}] = \int_\varSigma g'[\tilde{\boldsymbol{X}}] \, {\rm d} S + \int_{\varSigma} \kappa g \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S. \end{array}\right\} \end{equation}

Extensions to moving domains $\varOmega(t)$ and $\varSigma(t)$ are presented in Moubachir & Zolésio (2006). An important result in shape optimization is the Hadamard–Zolésio structure theorem (Delfour & Zolésio 2001, theorem 3.6, chapter 9), which guarantees that we can always express the first-order shape derivative of the functionals (2.11) as boundary integrals that depend only on the normal component of $\tilde{\boldsymbol{X}}$. Restricting the shape gradient to the boundary of a domain $\varOmega$ is of special interest to us, since it greatly simplifies the numerical aspects. In particular, it allows restricting the whole problem to a problem on the interface $\varSigma$ between the droplets and the surrounding fluid.
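The shape derivative $D\mathcal{J}_3$ from lemma 2.1 can be checked numerically. The sketch below is a two-dimensional analogue on an ellipse (so that $\kappa$ is the curve curvature): it compares a central finite difference of the perimeter under a normal perturbation against the quadrature of $\int_\varSigma \kappa\, (\tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n})\, \mathrm{d}S$. All discretization choices (resolution, perturbation field) are illustrative.

```python
import numpy as np

# Check of lemma 2.1 with g = 1, J_3(Sigma) = int_Sigma dS, for which
# D J_3[X~] = int_Sigma kappa (X~ . n) dS. Done on an ellipse in 2-D.
a, b = 2.0, 1.0
N = 4000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def perimeter(eps, phi):
    """Polygon perimeter of the ellipse perturbed by eps * phi(theta) * n."""
    X = np.stack([a * np.cos(theta), b * np.sin(theta)], axis=1)
    # Outward unit normal of the ellipse at parameter theta.
    n = np.stack([b * np.cos(theta), a * np.sin(theta)], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    Xe = X + eps * phi[:, None] * n
    return np.linalg.norm(np.roll(Xe, -1, axis=0) - Xe, axis=1).sum()

phi = 1.0 + 0.3 * np.cos(theta)   # the field X~ . n on the interface

# Central finite difference of the perimeter in the perturbation size.
eps = 1.0e-5
dJ_fd = (perimeter(eps, phi) - perimeter(-eps, phi)) / (2.0 * eps)

# Shape-calculus prediction: int kappa phi dS, using the ellipse curvature
# kappa = a b / (a^2 sin^2 + b^2 cos^2)^{3/2} and dS = (a^2 sin^2 + b^2 cos^2)^{1/2} dtheta.
w = a**2 * np.sin(theta)**2 + b**2 * np.cos(theta)**2
dJ_shape = np.sum(a * b * phi / w) * (2.0 * np.pi / N)

print(dJ_fd, dJ_shape)   # the two values should agree closely
```

For this particular $\phi$ both values reduce to the total curvature $2\pi$, which makes the check easy to read off; any smooth $\phi$ works the same way.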

Finally, we will informally consider the shape differentiability of surface functionals that depend on the problem variables $p_\pm, \boldsymbol{u}_\pm$ and $\boldsymbol{X}$. The main issue, also discussed in Pantz (2005) and Allaire et al. (2011), is that $\boldsymbol{u}$ as a function on $\varOmega$ is not shape differentiable, but the restrictions $\boldsymbol{u}_\pm$ to $\varOmega_\pm$ are shape differentiable.

Recall that, for $\textit {Ca} < \infty$, there is always a non-zero jump in pressure

(2.13)\begin{equation} [\![ p ]\!] = (1 - \lambda) (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}) \boldsymbol{\cdot} \boldsymbol{n} + \frac{1}{\textit{Ca}} \kappa, \end{equation}

and the following jumps in the components of the velocity gradient

(2.14ac)\begin{equation} [\![ \boldsymbol{t}^\alpha \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u} ]\!] = 0, \quad [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u} ]\!] \ne 0 \quad \text{and} \quad [\![ (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}) \boldsymbol{\cdot} \boldsymbol{n} ]\!] = 0, \end{equation}

where $\boldsymbol {t}^\alpha$, for $\alpha \in \{1, \ldots , d - 1\}$, is an orthonormal basis for the tangent space to $\varSigma$. Similar jump relations can be derived for the rate-of-strain tensor. With this in mind, we consider the general functional

(2.15)\begin{equation} \mathcal{J} = \int_\varSigma j(p_\pm, \boldsymbol{u}, \boldsymbol{X}) \,{\rm d}S. \end{equation}

We can see from lemma 2.1 that the integrand must be differentiable up to the boundary to allow defining $\boldsymbol {\nabla } j \boldsymbol {\cdot } \boldsymbol {n}$. In the following, we will restrict to differentiable functionals of the type

(2.16)\begin{equation} \mathcal{J} = \int_\varSigma j(\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}, \boldsymbol{X}) \,{\rm d}S, \end{equation}

i.e. that only depend on the normal component of the velocity field and the surface parameterization, both of which have continuous normal gradients. In the case of volume integrals, we would not need further restrictions on the cost functionals we consider.

3. Optimization problem

We now define two constrained optimization problems that we will attempt to characterize. In particular, we will use the Lagrange multiplier theory to determine adjoint equations and derive an expression for first-order variations of the cost functionals of interest.

Firstly, we define a classic shape optimization problem

(P1)\[ \left.\begin{array}{c@{}} \displaystyle \min_{\boldsymbol{X} \in \mathcal{U}_1} \mathscr{J}_1, \\ \mathscr{E}_1(\boldsymbol{u}_k, p_k, \boldsymbol{X}) = 0, \end{array}\right\} \]

where $\mathcal {U}_1$ is the set of all admissible parameterizations of interfaces $\varSigma \subset \mathbb {R}^d$ and $\mathscr {E}_1$ corresponds to the constraints (2.2) and (2.3). In general, we require that the fluid–fluid interfaces remain of class $\mathcal {C}^2$.

Remark 3.1 Note that the parameterizations of a surface $\varSigma$ are not unique. To rigorously define the set $\mathcal{U}_1$, we would be required to define an appropriate notion of equivalence classes (Delfour & Zolésio 2001).

Secondly, we consider the optimization with respect to a parameter of the problem, namely the capillary number $\textit {Ca}$. This problem is defined by

(P2)\[ \left.\begin{array}{c@{}} \displaystyle \min_{\textit{Ca} \in \mathcal{U}_2} \mathscr{J}_2, \\ \mathscr{E}_2(\boldsymbol{u}_k, p_k, \boldsymbol{X}, \textit{Ca}) = 0, \end{array}\right\} \]

where $\mathcal {U}_2 \triangleq \mathbb {R}_+$ and $\mathscr {E}_2$ encodes the constraints (2.2), (2.3) and (2.8). At least conceptually, we can see that the second problem is a superset of the first one, since it simply includes an additional constraint. For this reason, we will focus our attention on the first problem, with extensions to the second problem as necessary.

The two optimization problems use tracking-type cost functionals restricted to the interface $\varSigma$. They are

(3.1)\begin{equation} \left.\begin{array}{c@{}} \displaystyle\mathscr{J}_1(\boldsymbol{X}) \triangleq \mathcal{J}_1(\boldsymbol{X}, \boldsymbol{u}(\boldsymbol{X})) = \tfrac{1}{2} \int_\varSigma (\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n} - u_d)^2 \, {\rm d} S, \\ \displaystyle\mathscr{J}_2(\textit{Ca}) \triangleq \mathcal{J}_2(\textit{Ca}, \boldsymbol{X}(\textit{Ca})) = \tfrac{1}{2} \int_\varSigma \|\boldsymbol{X}(T) - \boldsymbol{X}_d\|^2 \, {\rm d} S, \end{array}\right\} \end{equation}

where $u_d$ and $\boldsymbol{X}_d$ represent a desired surface normal velocity field and a desired geometry, respectively. We differentiate between $\mathscr{J}_k$, which is a function of the control only, and $\mathcal{J}_k$, which is a function of the control and the state variables (as required). The equivalence between the two, through the corresponding constraints $\mathscr{E}_k$, is given by the implicit function theorem under suitable regularity assumptions. The main theorem used in the theory of equality-constrained optimization is stated below.

Theorem 3.2 Let $\mathscr {J}: X \to \mathbb {R}$ be a functional and $\mathscr {E}: X \to Z$ be a set of constraints, where both $X$ and $Z$ are Banach spaces. Let $x_0$ be a local extremum of $\mathscr {J}$ and a regular point of $\mathscr {E}$. If both $\mathscr {J}$ and $\mathscr {E}$ are continuously Fréchet differentiable in a neighbourhood $U$ of $x_0$, then there exists a point $z_0^* \in Z^*$, such that the Lagrangian functional

(3.2)\begin{equation} \mathscr{L}(x, z^*) \triangleq \mathscr{J}(x) - \langle z^*, \mathscr{E}(x) \rangle, \end{equation}

is stationary at $(x_0, z^*_0)$, i.e. the Fréchet derivative satisfies

(3.3)\begin{equation} \dfrac{\partial \mathscr{L}}{\partial x} (x_0, z^*_0) = D \mathscr{J}(x_0) - \langle z^*_0, D \mathscr{E}(x_0) \rangle = 0. \end{equation}

A proof of the Lagrange multiplier theorem above can be found in Luenberger (1997). For a control problem, the variable is $x \triangleq (y, u)$, where $y$ are the state variables and $u$ is the control. In this case, the statement that $(x_0, z^*_0)$ is a stationary point of the Lagrangian $\mathscr{L}$ can be split into the following equations:

(3.4)\begin{equation} \left.\begin{array}{c@{}} \dfrac{\partial \mathscr{L}}{\partial z^*} = 0 \implies \mathscr{E}(x) = 0, \quad \text{(state equations)} \\ \dfrac{\partial \mathscr{L}}{\partial y} = 0 \implies \left(\dfrac{\partial \mathscr{E}}{\partial y} \right)^*[z^*] = \dfrac{\partial \mathcal{J}}{\partial y}, \quad \text{(adjoint equations)} \\ \dfrac{\partial \mathscr{L}}{\partial u} = 0 \implies \dfrac{\mathrm{d} \mathscr{J}}{\mathrm{d} u} = \dfrac{\partial \mathcal{J}}{\partial u} - \left(\dfrac{\partial \mathscr{E}}{\partial u} \right)^*[z^*] = 0, \quad \text{(optimality condition)} \end{array}\right\} \end{equation}

which hold for all admissible directions in the corresponding spaces. In the following we will show the specific Lagrangian functionals for our problems of interest and the equations satisfied by the adjoint variables.
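The structure of (3.4) can be illustrated on a toy equality-constrained problem with a linear state equation and a quadratic cost; all operators below are random placeholders, unrelated to the Stokes system. The adjoint-based gradient is compared against a black-box finite-difference gradient, in the spirit of the verification performed later in the paper.

```python
import numpy as np

# Toy instance of (3.4): state equation E(y, u) = A y - B u - c = 0 and
# cost J = 0.5 |y - y_d|^2 + 0.5 alpha |u|^2. Matrices are placeholders.
rng = np.random.default_rng(0)
n, m, alpha = 6, 3, 0.1
A = rng.normal(size=(n, n)) + 5.0 * np.eye(n)   # well-conditioned state operator
B = rng.normal(size=(n, m))
c = rng.normal(size=n)
y_d = rng.normal(size=n)
u = rng.normal(size=m)

def solve_state(u):
    # State equations: E(y, u) = 0  =>  y = A^{-1} (B u + c).
    return np.linalg.solve(A, B @ u + c)

def cost(u):
    y = solve_state(u)
    return 0.5 * np.sum((y - y_d) ** 2) + 0.5 * alpha * np.sum(u ** 2)

# Adjoint equations: (dE/dy)^* z = dJ/dy, i.e. A^T z = y - y_d.
y = solve_state(u)
z = np.linalg.solve(A.T, y - y_d)
# Optimality condition gradient: dJ/du = alpha u - (dE/du)^* z = alpha u + B^T z.
grad_adj = alpha * u + B.T @ z

# Black-box central finite-difference gradient for comparison.
eps = 1.0e-6
grad_fd = np.array([
    (cost(u + eps * e) - cost(u - eps * e)) / (2.0 * eps)
    for e in np.eye(m)
])

print(np.max(np.abs(grad_adj - grad_fd)))
```

Note that one adjoint solve yields the full gradient, whereas the finite-difference version costs two state solves per control component; this is the usual motivation for the adjoint method.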

Remark. Rigorous proofs of existence and uniqueness of solutions to (P1) and (P2) are out of scope for this work. We refer to the seminal work of Ladyzhenskaya (1963) on smooth domains for results related to the static problem (2.2) and (2.3). Extensions to the unsteady problem can be found in Denisova & Solonnikov (1991) and subsequent work. More recently, such problems have been investigated in Kunisch & Lu (2011) and Pruss & Simonett (2016).

3.1. Static problem

We start by looking at the optimization problem (P1) with the static constraints (2.2) and (2.3). The resulting Lagrangian can be written as

(3.5)\begin{equation} \mathscr{L}_1 \triangleq \mathscr{J}_1 - \langle \boldsymbol{z}^*, \mathscr{E}_1(\boldsymbol{w}, \boldsymbol{X}) \rangle, \end{equation}

where $\boldsymbol {z}^*$ are the adjoint variables and $\boldsymbol {w}_\pm \triangleq (\boldsymbol {u}_\pm , p_\pm )$ are the state variables on $\varOmega _\pm$, such that $\boldsymbol {w} \triangleq (\boldsymbol {w}_+, \boldsymbol {w}_-)$. Expanding the constraints, we write

(3.6)\begin{align} \mathscr{L}_1 &\triangleq \mathscr{J}_1 + \int_{\varOmega_+} \boldsymbol{\nabla} \boldsymbol{u}^*_+ \boldsymbol{\cdot} \sigma_+ \, \textrm{d} V - \int_{\varOmega_+} p^*_+ (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_+)\, \textrm{d} V + \frac{1}{\textit{Ca}} \int_\varSigma \kappa \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &\quad + \int_{\varOmega_-} \lambda \boldsymbol{\nabla} \boldsymbol{u}^*_- \boldsymbol{\cdot} \sigma_- \, \textrm{d} V - \int_{\varOmega_-} \lambda p^*_- (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_-)\, \textrm{d} V \nonumber\\ &\quad + \int_\varSigma \langle \,\boldsymbol{f} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u}^* ]\!] \, {\rm d} S + \int_\varSigma \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] \, {\rm d} S, \end{align}

where $\langle \,f \rangle _\lambda$ denotes a weighted average, defined as

(3.7)\begin{equation} \langle \,f \rangle _\lambda \triangleq \tfrac{1}{2} (f_+ + \lambda f_-). \end{equation}

The weak form of the Stokes equations, (2.2), can be obtained by integration by parts and then applying the following equality:

(3.8)\begin{equation} [\![\, f g ]\!] _\lambda = [\![\, f ]\!] \langle g \rangle _\lambda + \langle \,f \rangle [\![ g ]\!] _\lambda. \end{equation}
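Identity (3.8) is elementary but easy to get wrong when mixing the weighted and unweighted jumps and averages; a quick numerical spot check with arbitrary values (chosen here purely for illustration) confirms it.

```python
# Numerical check of the product rule (3.8), with the conventions
# [f]_lam = f+ - lam f-,  [f] = f+ - f-,
# <f>_lam = (f+ + lam f-)/2,  <f> = (f+ + f-)/2.
fp, fm, gp, gm, lam = 1.7, -0.4, 0.9, 2.3, 0.25

lhs = fp * gp - lam * fm * gm                      # [f g]_lam
rhs = (fp - fm) * 0.5 * (gp + lam * gm) \
      + 0.5 * (fp + fm) * (gp - lam * gm)          # [f] <g>_lam + <f> [g]_lam
print(abs(lhs - rhs))
```

By contrast, the alternative expansion (3.9) discussed in remark 3.3 below is also algebraically valid, but involves the one-sided traces $f_+$ and $g_-$, which is what makes it unsuitable for the shape-calculus argument.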

Remark 3.3 The main subtlety in our derivation comes from the choice in constructing the Lagrangian. In classic single-phase works, such as Bewley, Moin & Temam (2001) and Wei & Freund (2006), the strong form of the equations is used with an integral over the whole domain. In the context of multiphase flows, such an approach would be possible using the one-fluid model, where $(\boldsymbol{u}, p)$ are considered as bulk variables with the jump conditions carried over as singular source terms. However, such a choice results in an incorrect shape derivative. As discussed in Pantz (2005), the reason for this is that the state variables are not shape differentiable over $\varOmega$.

Therefore, equations with discontinuous coefficients and, in particular, multiphase flows must be treated with care. The correct construction involves considering the restrictions to $\varOmega _\pm$, i.e. $(\boldsymbol {u}_\pm , p_\pm )$ as above, and enforcing values that are well-defined on the interface. For example, an alternative expansion of

(3.9)\begin{equation} [\![\, f g ]\!] _\lambda = f_+ [\![ g ]\!] _\lambda + \lambda [\![\, f ]\!] g_-, \end{equation}

would lead to ill-defined results. The choice of a weak form in (3.6) is not formally required, but allows us to directly apply the results from lemma 2.1 and is consistent with the methods used in the shape optimization community, e.g. in Allaire & Schoenauer (2007).

We start by stating the equations satisfied by the adjoint variables in the following result.

Lemma 3.4 At a stationary point of the Lagrangian (3.6), the adjoint variables $\boldsymbol {w}^*_\pm$ satisfy the following equations:

(3.10)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}^*_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm, \\ \boldsymbol{\nabla} \boldsymbol{\cdot} \sigma^*_\pm = 0, \quad \boldsymbol{x} \in \varOmega_\pm, \\ \boldsymbol{u}^*_+ = 0, \quad \|\boldsymbol{x}\| \to \infty, \end{array}\right\} \end{equation}

with the jump conditions

(3.11)\begin{equation} \left.\begin{array}{c@{}} {[\![ \boldsymbol{u}^* ]\!]} = 0, \\ {[\![\, \boldsymbol{f}^* ]\!]} _\lambda = \boldsymbol{\nabla}_{\boldsymbol{u}} \mathcal{J}_1. \end{array}\right\} \end{equation}

The adjoint Cauchy stress tensor, rate-of-strain tensor and traction vector are in complete analogy to the definitions of the primal Stokes problem given in § 2. For the cost functional (3.1), we have that

(3.12)\begin{equation} \boldsymbol{\nabla} _{\boldsymbol{u}} \mathcal{J}_1 = (\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n} - u_d) \boldsymbol{n}. \end{equation}

Proof. The adjoint equations are easily obtained by differentiating the Lagrangian with respect to the state variables $\boldsymbol {w}_\pm$. We have that

(3.13)\begin{align} \dfrac{\partial \mathscr{L}_1}{\partial \boldsymbol{w}} [\tilde{\boldsymbol{w}}] &= \dfrac{\partial \mathcal{J}_1}{\partial \boldsymbol{w}} [\tilde{\boldsymbol{w}}] + \int_{\varOmega_+} \boldsymbol{\nabla} \boldsymbol{u}^*_+ \boldsymbol{\cdot} \tilde{\sigma}_+ \, \textrm{d} V - \int_{\varOmega_+} p^*_+ (\boldsymbol{\nabla} \boldsymbol{\cdot} \tilde{\boldsymbol{u}}_+)\, \textrm{d} V \nonumber\\ &\quad + \int_{\varOmega_-} \lambda \boldsymbol{\nabla} \boldsymbol{u}^*_- \boldsymbol{\cdot} \tilde{\sigma}_- \, \textrm{d} V - \int_{\varOmega_-} \lambda p^*_- (\boldsymbol{\nabla} \boldsymbol{\cdot} \tilde{\boldsymbol{u}}_-)\, \textrm{d} V + \int_\varSigma \langle \,\tilde{\boldsymbol{f}} \rangle_\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u}^* ]\!] \, {\rm d} S. \end{align}

For $k \in \{+, -\}$, we have that

(3.14)\begin{equation} \int_{\varOmega_k} \boldsymbol{\nabla} \boldsymbol{u}^*_k \boldsymbol{\cdot} \tilde{\sigma}_k \, \textrm{d} V - \int_{\varOmega_k} p^*_k (\boldsymbol{\nabla} \boldsymbol{\cdot} \tilde{\boldsymbol{u}}_k)\, \textrm{d} V = \int_{\varOmega_k} \sigma^*_k \boldsymbol{\cdot} \boldsymbol{\nabla} \tilde{\boldsymbol{u}}_k \, \textrm{d} V - \int_{\varOmega_k} (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}^*_k) \tilde{p}_k \, \textrm{d} V, \end{equation}

by algebraic manipulations. This is the weak formulation of the adjoint equations from lemma 3.4 with test functions $(\tilde {\boldsymbol {u}}_\pm , \tilde {p}_\pm )$.

We are now in a position to state the shape derivative of the cost functional from (P1).

Theorem 3.5 We assume that $(\boldsymbol {u}_\pm , p_\pm )$ and $\mathscr {J}_1$ are shape differentiable. Then, the shape gradient of the cost functional (P1) is

(3.15)\begin{equation} \boldsymbol{\nabla} _{\boldsymbol{X}} \mathscr{J}_1 = \boldsymbol{\nabla} _{\boldsymbol{X}} \mathcal{J}_1 + \mathcal{C}^*[\boldsymbol{w}^*; \boldsymbol{X}] + \mathcal{S}^*[\boldsymbol{w}^*, \boldsymbol{X}, \boldsymbol{w}], \end{equation}

where

(3.16)\begin{equation} \left.\begin{array}{c@{}} \mathcal{C}^*[\boldsymbol{w}^*; \boldsymbol{X}] \triangleq \dfrac{1}{\textit{Ca}} ( \kappa^*[\boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n}] + n^*[\kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{t}] + \kappa^2 \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n} + \kappa (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^*) \boldsymbol{\cdot} \boldsymbol{n} ), \\ \mathcal{S}^*[\boldsymbol{w}^*; \boldsymbol{X}, \boldsymbol{w}] \triangleq \langle \,\boldsymbol{f} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^* ]\!] + \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u} ]\!] - 2 [\![ \varepsilon^* \boldsymbol{\cdot} \varepsilon ]\!] _\lambda. \end{array}\right\} \end{equation}

For the cost functional (3.1), we have that

(3.17)\begin{equation} \boldsymbol{\nabla} _{\boldsymbol{X}} \mathcal{J}_1 = n^*[\,j \boldsymbol{u}] + j (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}) \boldsymbol{\cdot} \boldsymbol{n} + \tfrac{1}{2} \kappa j^2, \end{equation}

where $j \triangleq \boldsymbol {u} \boldsymbol {\cdot } \boldsymbol {n} - u_d$. The operators $\kappa ^*$ and $n^*$ correspond to the adjoint operators of the Eulerian shape derivative of the additive curvature and normal vector, respectively. They are defined in appendix B.

Proof. A detailed derivation of the shape gradient is given in appendix C. We note that both the derivation of the shape gradient in theorem 3.5 and the adjoint equations in lemma 3.4 are purely formal; they assume that the state variables have shape derivatives of the required regularity.

3.2. Quasi-static problem

The extension from (P1) to the unsteady problem (P2) is straightforward from the point of view of the Lagrangian formalism. However, introducing the unsteady element complicates the theoretical analysis of the problem, since it is not clear if the solution will maintain the required smoothness. Similar issues appear at a numerical level, as we will see in subsequent sections.

Furthermore, the shape derivative formalism from § 2.2 needs to be extended. Intuitively, we now want to perturb the time-dependent domains $\varOmega _\pm (t)$. In the $(d + 1)$-dimensional $(t, \boldsymbol {x})$ space, one can think of a fixed domain $\varOmega$ as a cylinder, while a moving domain $\varOmega (t)$ will be a general tubular manifold. Domain perturbations of this kind were made rigorous in Moubachir & Zolésio (2006), to which we refer for any additional details.

As before, the Lagrangian functional associated with (P2) is given by

(3.18)\begin{equation} \mathscr{L}_2 \triangleq \mathscr{J}_2 - \langle \boldsymbol{z}^*, \mathscr{E}_2(\boldsymbol{w}, \boldsymbol{X}, \textit{Ca}) \rangle, \end{equation}

or, explicitly,

(3.19)\begin{align} \mathscr{L}_2 &\triangleq \mathscr{J}_2 + \int_0^T \int_{\varOmega_+(t)} \boldsymbol{\nabla} \boldsymbol{u}^*_+ \boldsymbol{\cdot} \sigma_+ \, \textrm{d} V \, \textrm{d} t - \int_0^T \int_{\varOmega_+(t)} p^*_+ (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_+)\, \textrm{d} V \, \textrm{d} t \nonumber\\ &\quad + \int_0^T \int_{\varOmega_-(t)} \lambda \boldsymbol{\nabla} \boldsymbol{u}^*_- \boldsymbol{\cdot} \sigma_- \, \textrm{d} V \, \textrm{d} t - \int_0^T \int_{\varOmega_-(t)} \lambda p^*_- (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_-)\, \textrm{d} V \, \textrm{d} t \nonumber\\ &\quad + \frac{1}{\textit{Ca}} \int_0^T \int_{\varSigma(t)} \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \, \textrm{d} t \nonumber\\ &\quad + \int_0^T \int_{\varSigma(t)} \langle \,\boldsymbol{f} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u}^* ]\!] \, {\rm d} S \, \textrm{d} t + \int_0^T \int_{\varSigma(t)} \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] \, {\rm d} S \, \textrm{d} t \nonumber\\ &\quad - \int_0^T \int_{\varSigma(t)} \boldsymbol{X}^* \boldsymbol{\cdot} (\boldsymbol{X}_t - \boldsymbol{V}(\boldsymbol{u}, \boldsymbol{X})) \,{\rm d}S \, \textrm{d} t \nonumber\\ & \quad -\int_{\varSigma(0)} \boldsymbol{X}^*_0 \boldsymbol{\cdot} (\boldsymbol{X}(0) - \boldsymbol{X}_0) \,{\rm d}S. \end{align}

Lemma 3.6 We assume that $\boldsymbol {w}_\pm$ and $\mathscr {J}_2$ are shape differentiable. Furthermore, the source term $\boldsymbol {V}$ in (2.8) is also assumed to be shape differentiable. Then, at a stationary point of the Lagrangian (3.19), the adjoint variables $\boldsymbol {w}^*_\pm$ satisfy the Stokes equations (2.2) with the jump conditions

(3.20)\begin{equation} \left.\begin{array}{c@{}} {[\![ \boldsymbol{u}^* ]\!]} = 0, \\ \displaystyle {[\![\, \boldsymbol{f}^* ]\!]} _\lambda = \boldsymbol{\nabla} _{\boldsymbol{u}} \mathcal{J}_2 + \left(\dfrac{\partial \boldsymbol{V}}{\partial \boldsymbol{u}} \right)^*[\boldsymbol{X}^*], \end{array}\right\} \end{equation}

where $\boldsymbol {X}^*$ is given by lemma 3.7. In the case of the cost functional from (3.1), we have that

(3.21)\begin{equation} \boldsymbol{\nabla} _{\boldsymbol{u}} \mathcal{J}_2 = 0. \end{equation}

Also, using the forcing term described in (2.9), we have that

(3.22)\begin{equation} \left(\dfrac{\partial \boldsymbol{V}}{\partial \boldsymbol{u}} \right)^*[\boldsymbol{X}^*] = (\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{n}. \end{equation}

Proof. The proof is completely equivalent to that of lemma 3.4.

Lemma 3.7 Under the assumptions of lemma 3.6, we have that at a stationary point of the Lagrangian (3.19), the adjoint variable $\boldsymbol {X}^*$ satisfies the following evolution equation:

(3.23)\begin{equation} \left.\begin{array}{c@{}} \begin{aligned} -\boldsymbol{X}^*_t & = \{\boldsymbol{\nabla} _{\boldsymbol{X}} \mathcal{J}_2 + (\boldsymbol{\nabla} _\varSigma \boldsymbol{\cdot} \boldsymbol{V}) (\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}) + \mathcal{V}^*[\boldsymbol{X}^*; \boldsymbol{X}, \boldsymbol{w}] \\ & \quad + \mathcal{C}^*[\boldsymbol{w}^*; \boldsymbol{X}] + \mathcal{S}^*[\boldsymbol{w}^*; \boldsymbol{X}, \boldsymbol{w}]\} \boldsymbol{n}, \end{aligned}\\ \boldsymbol{X}^*(T) = \boldsymbol{\nabla} _{\boldsymbol{X}(T)} \mathcal{J}_2, \end{array}\right\} \end{equation}

where $\mathcal {C}^*$ and $\mathcal {S}^*$ are defined in lemma 3.4 and $\mathcal {V}^*$ is the adjoint operator corresponding to (2.8). In the case of the cost functional (3.1), we have that

(3.24)\begin{equation} \boldsymbol{\nabla} _{\boldsymbol{X}} \mathcal{J}_2 = (\boldsymbol{X} - \boldsymbol{X}_d) \boldsymbol{\cdot} \boldsymbol{n} + \frac{\kappa}{2} \|\boldsymbol{X} - \boldsymbol{X}_d\|^2. \end{equation}

Similarly, for the choice of forcing term described in (2.9), we have that

(3.25)\begin{equation} \mathcal{V}^*[\boldsymbol{X}^*; \boldsymbol{X}, \boldsymbol{w}] \triangleq n^*[(\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{u}] + n^*[(\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{X}^*] + ((\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}) \boldsymbol{\cdot} \boldsymbol{n}) (\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}). \end{equation}

Proof. The derivation of the above evolution equation is presented in appendix C. Determining the shape gradient of the cost functional (3.1) is a direct application of lemma 2.1.

Theorem 3.8 The gradient of the cost functional for (P2) is given by

(3.26)\begin{equation} \nabla_{\textit{Ca}} \mathscr{J}_2 = -\frac{1}{\textit{Ca}^2} \int_0^T \int_\varSigma \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \, {\rm d} t. \end{equation}

Proof. The proof of theorem 3.8 follows from the definition of the Lagrangian given in (3.19). Note that the gradient is simply a classic derivative in $\mathbb {R}$ in this case.

This completes the statement of the two optimization problems we have proposed. In both cases, we have access to first-order variations that can be used as part of gradient descent algorithms to perform the optimization.
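As a sketch of how these first-order variations enter a descent loop, consider the following toy example (our own illustration, not part of the original formulation; `cost` and `gradient` are hypothetical stand-ins for a forward solve and the adjoint-based gradient such as (3.26), here replaced by a simple quadratic):

```python
def cost(ca):
    # hypothetical stand-in for the cost functional J(Ca):
    # in practice this requires a full forward Stokes solve
    return (ca - 0.05) ** 2

def gradient(ca):
    # hypothetical stand-in for the adjoint-based gradient,
    # obtained in practice from a single adjoint solve
    return 2.0 * (ca - 0.05)

def optimize(ca, step=0.25, tol=1e-10, maxit=200):
    """Plain gradient descent on a scalar control (e.g. the capillary number)."""
    for _ in range(maxit):
        g = gradient(ca)
        if abs(g) < tol:
            break
        ca = ca - step * g
    return ca

ca_opt = optimize(ca=0.5)
assert abs(ca_opt - 0.05) < 1e-6
```

The appeal of the adjoint approach is that `gradient` costs one extra (adjoint) solve regardless of the number of control parameters, whereas black-box finite differences would require one forward solve per parameter.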

4. Numerical methods

In this section, we will detail the numerical methods used to solve the Stokes system (2.2) and (2.3) and the adjoint Stokes system described in lemma 3.4. Since the Stokes equations are symmetric, we require a single solver in both cases. We will also look into discretizations of the shape gradient in theorem 3.5, the interface evolution equation (2.8) and the corresponding adjoint evolution equation from lemma 3.7. The numerical methods described here have been implemented in an open-source code that can be found in Fikl (2020).

4.1. Boundary integral equations

An efficient method of solving the Stokes equations is based on boundary integral equations (Pozrikidis 1992). In our case, this is particularly useful because the entire problem reduces to the interface if the cost functional is expressed on $\varSigma$ alone, as is the case for the choices presented in (3.1).

Boundary integral methods are based on Green's function fundamental solutions to the Stokes problem. Using the fundamental solutions as kernels in a linear integral operator, there are many possible representations of the two-phase Stokes problem. A common representation is based on the Lorentz reciprocal theorem and has been used in studies of interfacial dynamics (see Pozrikidis 1990; Stone 1994; Lac & Homsy 2007). However, this representation is less suitable for our case, since the pressure and stress kernels are hypersingular. Hypersingular integrals are generally harder to analyse and to evaluate accurately.

A second representation for two-phase flows with a stress discontinuity has been discussed in Pozrikidis (1990) and is described in detail in Pozrikidis (1992, chapter 5.3). It is a so-called single-layer potential representation, with the velocity field described by, for $\boldsymbol {x} \notin \varSigma$,

(4.1)\begin{equation} u_i(\boldsymbol{x}) = \mathcal{G}_i[\boldsymbol{q}](\boldsymbol{x}) \triangleq \int_\varSigma {\mathsf{G}}_{ij}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \end{equation}

where $\boldsymbol {q}: \varSigma \to \mathbb {R}^d$ and ${\mathsf{G}}_{ij}$ are usually referred to as the density and the Stokeslet (see (4.4)), respectively. This representation is complemented by definitions of the pressure and stress, for $\boldsymbol {x} \notin \varSigma$,

(4.2)\begin{gather} p_\pm(\boldsymbol{x}) = \mathcal{P}[\boldsymbol{q}](\boldsymbol{x}) \triangleq \int_\varSigma p_j(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \end{gather}
(4.3)\begin{gather}(\sigma_\pm)_{ik}(\boldsymbol{x}) = \mathcal{T}_{ik}[\boldsymbol{q}](\boldsymbol{x}) \triangleq \int_\varSigma {\mathsf{T}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \end{gather}

where both have been appropriately non-dimensionalized, as described in § 2.1. The usual summation convention over repeated indices has been used in the definitions. The fundamental solutions for the three main variables in Stokes flow are given by

(4.4)\begin{equation} \left.\begin{array}{c@{}} p_i(\boldsymbol{r}) \triangleq 2\dfrac{r_i}{r^3}, \\ {\mathsf{G}}_{ij}(\boldsymbol{r}) \triangleq \dfrac{\delta_{ij}}{r} + \dfrac{r_i r_j}{r^3}, \\ {\mathsf{T}}_{ijk}(\boldsymbol{r}) \triangleq -6 \dfrac{r_i r_j r_k}{r^5}, \end{array}\right\} \end{equation}

for $i, j, k \in \{1, \ldots , d\}$, where $\boldsymbol {r} \triangleq \boldsymbol {x} - \boldsymbol {y}$ and $r \triangleq \|\boldsymbol {r}\|$. To solve for the density we use the provided boundary and jump conditions, which, however, require limits as $\boldsymbol {x} \to \varSigma$ of the layer potentials above. These limits are generally not trivial, since the kernels in question are all singular at $\varSigma$. The jumps in the velocity, pressure and traction are well known and given in Pozrikidis (1992, § 4.1). Using the surface limits of the traction and the jump (2.3), we obtain the following equation (Pozrikidis 1992, (5.3.9)):

(4.5)\begin{equation} 4 {\rm \pi}q_k(\boldsymbol{x}) - \frac{1 - \lambda}{1 + \lambda} n_i(\boldsymbol{x}) \mathcal{T}_{ik}[\boldsymbol{q}](\boldsymbol{x}) = -\frac{1}{1 + \lambda} [\![\, f_k ]\!] _\lambda. \end{equation}

In the case of non-homogeneous far-field boundary conditions, additional terms can be added, as described in Pozrikidis (1992).

Equation (4.5) is a Fredholm integral equation of the second kind. The linear operator acting on the density $\boldsymbol {q}$ is generally well-conditioned. In fact, it matches the conditioning of the underlying physical problem, which is well-conditioned for $0 < \lambda < \infty$. This allows for efficient solution by classic iterative linear solvers such as GMRES.
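The favourable behaviour of second-kind equations can be illustrated on a toy Nyström discretization (our own example, with a made-up smooth kernel and right-hand side, not the Stokes kernels of (4.5)): the discrete operator stays well-conditioned and even a plain Picard iteration converges quickly, which is what makes Krylov solvers such as GMRES effective here.

```python
import numpy as np

# Toy second-kind Fredholm equation on [0, 2*pi]:
#   q(x) - (1/(4*pi)) \int_0^{2*pi} cos(x - y) q(y) dy = f(x),
# discretized with the periodic trapezoidal rule (a Nystrom method).
n = 64
x = 2.0 * np.pi * np.arange(n) / n
w = 2.0 * np.pi / n                       # uniform trapezoidal weights
K = np.cos(x[:, None] - x[None, :]) / (4.0 * np.pi)
A = np.eye(n) - K * w                     # (I - K) q = f
f = np.sin(x)

# Second-kind operators are "identity plus compact": cond(A) stays O(1)
# as n grows, so iterative solvers converge in a few iterations.
assert np.linalg.cond(A) < 10.0

q = np.linalg.solve(A, f)

# Picard iteration q <- f + K q converges here since ||K|| < 1.
qi = np.zeros(n)
for _ in range(50):
    qi = f + (K * w) @ qi
assert np.max(np.abs(qi - q)) < 1e-12
```

First-kind equations (no identity term) would instead become increasingly ill-conditioned under refinement, which is one reason the single-layer representation with jump relation (4.5) is attractive.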

The layer potentials presented so far are sufficient to solve the forward Stokes equations (2.2), (2.3) and (2.8). However, in the case of the adjoint equations, we further require knowledge of the normal and tangential velocity gradients. They can be obtained by differentiating the velocity (4.1) and are given, for $\boldsymbol {x} \notin \varSigma$, by

(4.6)\begin{gather} (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u})_i(\boldsymbol{x}) = \mathcal{U}_i[\boldsymbol{q}](\boldsymbol{x}) \triangleq n_k(\boldsymbol{x}) \int_\varSigma {\mathsf{U}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \end{gather}
(4.7)\begin{gather}(\boldsymbol{t}^\alpha \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u})_i(\boldsymbol{x}) = \mathcal{V}^\alpha_i[\boldsymbol{q}](\boldsymbol{x}) \triangleq t_k^\alpha(\boldsymbol{x}) \int_\varSigma {\mathsf{U}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S. \end{gather}

The kernel for the velocity gradient is given by

(4.8)\begin{equation} {\mathsf{U}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) = \dfrac{\partial }{\partial x_k} {\mathsf{G}}_{ij}(\boldsymbol{x}, \boldsymbol{y}) = \frac{r_k \delta_{ij} - r_i \delta_{jk} - r_j \delta_{ik}}{r^3} + 3 \frac{r_i r_j r_k}{r^5}. \end{equation}

The surface limits of the new layer-potential operators are not standard, but they can be easily derived from the main variables and the definition of the stress tensor (2.1).

Theorem 4.1 The jump conditions for the normal and tangential velocity gradient layer potentials, defined in (4.6) and (4.7), are given by

(4.9)\begin{equation} \left.\begin{array}{c@{}} \displaystyle\lim_{\epsilon \to 0^+} \mathcal{U}_i[\boldsymbol{q}](\boldsymbol{x} \pm \epsilon \boldsymbol{n}) = \pm 4 {\rm \pi}(q_i - (q_j n_j) n_i) + \mathrm{p.v.} \int_\varSigma n_k(\boldsymbol{x}) {\mathsf{U}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \\ \displaystyle\lim_{\epsilon \to 0^+} \mathcal{V}^\alpha_i[\boldsymbol{q}](\boldsymbol{x} \pm \epsilon \boldsymbol{n}) = \mathrm{p.v.} \int_\varSigma t^\alpha_k(\boldsymbol{x}) {\mathsf{U}}_{ijk}(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) \,{\rm d}S, \end{array}\right\} \end{equation}

where $\mathrm {p.v.}$ denotes the Cauchy principal value interpretation of the singular integral.

We must also obtain formulae for the stress and strain-rate tensors that appear in theorem 3.5. The stress tensor can most easily be obtained algebraically from the traction, the pressure and the components of the velocity gradient from (4.6) and (4.7), by means of (2.1). Then, the strain-rate tensor can be directly obtained from (2.1). Explicit formulae for the stress components in both two and three dimensions are given in Aliabadi (2002, § 2.5.1).

Finally, for the numerical implementation, we will make an axisymmetric assumption on the problem set-up, with coordinates $\boldsymbol {x} = (x, \rho , \phi )$ (see figure 1). With this assumption, we must also change the meaning of the fundamental solutions we have used thus far. For example, in the three-dimensional case, ${\mathsf{G}}_{ij}$ represents the interactions between a source point $\boldsymbol {y}$ and a target point $\boldsymbol {x}$ of strength $\boldsymbol {q}$. However, in the axisymmetric setting, we must consider the interactions between a target point $\boldsymbol {x}$ and a ring of source points $\boldsymbol {y}$ for fixed $(x_0, \rho _0)$. This is achieved by analytically integrating the kernels in the azimuthal direction $\phi$, with the aid of elliptic integrals. Several axisymmetric kernels have been derived in Pozrikidis (1992). We give in appendix D a short description of all the kernels required in this work.

Figure 1. Droplet in axisymmetric coordinates $(x, \rho , \phi )$.

4.2. Boundary element methods

The numerical discretization of the layer potentials from § 4.1 has been performed using a standard collocation method (also known as a boundary element method), similar to the methods described in Kress (1989, chapter 13.3). We will only present the details for a single droplet, as seen in figure 1. In an axisymmetric setting, all the integrals over the surface $\varSigma$ can be reduced to integrals over the top half-plane and a one-dimensional interface $\varGamma \triangleq \{\boldsymbol {x} \in \varSigma : \phi = 0\}$. We refer to Kress (1989), Aliabadi (2002) or Pozrikidis (1992) for additional details on the construction of one-dimensional collocation methods and only briefly describe the chosen notation for this work.

We define a uniform one-dimensional computational grid with parameter $\xi \in [0, 1/2]$ discretized into $M$ elements $\varGamma _m \triangleq [\xi _m, \xi _{m + 1})$, for $m \in \{0, \ldots , M\}$. On each element $\varGamma _m$, we define a set of $N + 1$ uniformly distributed collocation points $\xi _{m, n}$, for a total of $N_c \triangleq (M \times N) + 1$ unique points. For a high-order finite element type discretization, we use the standard nodal Lagrange basis functions $\phi _n$. Finally, we use the standard Gauss–Legendre quadrature rule on regular elements and specialized methods for elements containing singularities.

4.3. Singularity handling and singular quadrature rules

The main difficulty in implementing a boundary element method lies in the handling of the singularities for each kernel. As seen in the previous sections, the adjoint problem requires all the quantities present in the Stokes problem, which exhibit several different singularity types. The surface limit of the velocity layer potential (4.1) is weakly singular, with a $\log$-type singularity, while the remaining layer potentials, e.g. (4.3), are defined by means of the Cauchy principal value and are strongly singular, with an $r^{-1}$-type singularity.

There are multiple methods used to handle singular integrals, such as changes of variables, singularity subtraction, regularization or generalized quadrature rules (see Aliabadi 2002, chapter 11). We focus on making use of generalized quadrature rules and analytical techniques for accurate numerical integration. For all singularity types we rely on an element subdivision method, as seen in figure 2.

Figure 2. (a) Collocation points (black circles) on two adjacent elements and (b) element subdivisions and quadrature points (grey circles) for a singularity at point $\boldsymbol {x}_i$.

This method proceeds by isolating the element containing the currently evaluated target point $\boldsymbol {x}_i$ and splitting it into two elements with specially constructed quadrature rules on each. This reduces the problem to that of endpoint singularities and minimizes the number of required quadrature rules, which is important because the construction of generalized quadrature rules depends on the location of the singularity, e.g. Carley (2006) and Kolm & Rokhlin (2001). Two special cases need to be handled: at element endpoints, the two existing adjacent elements serve as the subdivision, while the two points at the poles ($\rho = 0$) only make use of the interior element for a one-sided evaluation.

4.3.1. Singularity subtraction

The pressure kernel (4.2), the traction kernel (4.3) and the normal velocity gradient kernel (4.6) can all be handled by the method of singularity subtraction. The singularity subtraction for the traction is described in Pozrikidis (1990). In the case of the pressure kernel, we make use of the well-known identities

(4.10)\begin{equation} \int_{\varSigma} p_j(\boldsymbol{x}, \boldsymbol{y}) n_j(\,\boldsymbol{y}) \,{\rm d}S = 4 {\rm \pi}\quad \text{and} \quad \int_{\varSigma} \varepsilon_{ijk} p_j(\boldsymbol{x}, \boldsymbol{y}) n_k(\boldsymbol{x}) \,{\rm d}S = 0, \end{equation}

where $\varepsilon _{ijk}$ is the Levi-Civita symbol forming the cross product. We can express the pressure layer potential (4.2), for $\boldsymbol {x} \in \varSigma$, as

(4.11)\begin{equation} \mathcal{P}[\boldsymbol{q}](\boldsymbol{x}) = \int_{\varSigma} (p_j(\boldsymbol{x}, \boldsymbol{y}) q_j(\,\boldsymbol{y}) - \tilde{p}_j(\boldsymbol{x}, \boldsymbol{y}) q_j(\boldsymbol{x})) \,{\rm d}S + 4 {\rm \pi}(q_k n_k)(\boldsymbol{x}). \end{equation}

The new kernel $\tilde {p}_j$ is given by

(4.12)\begin{equation} \tilde{p}_j(\boldsymbol{x}, \boldsymbol{y}) \triangleq p_k(\boldsymbol{x}, \boldsymbol{y}) n_k(\,\boldsymbol{y}) n_j(\boldsymbol{x}) + p_j(\boldsymbol{x}, \boldsymbol{y}) n_k(\,\boldsymbol{y}) n_j(\boldsymbol{x}) - p_k(\boldsymbol{x}, \boldsymbol{y}) n_k(\boldsymbol{x}) n_j(\,\boldsymbol{y}). \end{equation}

The axisymmetric form of this kernel can be found in appendix D. Finally, the layer-potential representation of the normal velocity gradient (4.6) can be desingularized by noting that

(4.13)\begin{equation} {\mathsf{U}}_{ijk}(\boldsymbol{r}) = \tfrac{1}{2} ( p_k(\boldsymbol{r}) \delta_{ij} - p_i(\boldsymbol{r}) \delta_{jk} - p_j(\boldsymbol{r}) \delta_{ik} ) - \tfrac{1}{2} {\mathsf{T}}_{ijk}(\boldsymbol{r}) \end{equation}

and using the formulae derived for the pressure and traction layer potentials. Since the integrals are no longer singular, we use the standard Gauss–Legendre quadrature rules on each subdivision. The subdivisions are still required in this case, even though the singularity is removed, because the resulting integrand is generally not smooth (only continuous) and can degrade the accuracy of the solution.
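Identity (4.13) can be checked numerically against the component formulae; the sketch below (our own verification script, not part of the original paper) evaluates both sides at a random separation vector $\boldsymbol{r}$ using the kernels of (4.4) and (4.8):

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(-1.0, 1.0, size=3)
rn = np.linalg.norm(r)
d = np.eye(3)

# free-space Stokes kernels (4.4): pressure, Stokeslet, stresslet
p = 2.0 * r / rn**3
G = d / rn + np.einsum('i,j->ij', r, r) / rn**3
T = -6.0 * np.einsum('i,j,k->ijk', r, r, r) / rn**5

# velocity-gradient kernel (4.8)
U = (np.einsum('k,ij->ijk', r, d)
     - np.einsum('i,jk->ijk', r, d)
     - np.einsum('j,ik->ijk', r, d)) / rn**3 \
    + 3.0 * np.einsum('i,j,k->ijk', r, r, r) / rn**5

# identity (4.13): U_ijk = (p_k d_ij - p_i d_jk - p_j d_ik)/2 - T_ijk/2
rhs = 0.5 * (np.einsum('k,ij->ijk', p, d)
             - np.einsum('i,jk->ijk', p, d)
             - np.einsum('j,ik->ijk', p, d)) - 0.5 * T
assert np.allclose(U, rhs)
```

The identity holds componentwise and exactly, which is what allows the normal velocity gradient to inherit the singularity subtraction formulae already derived for the pressure and traction.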

4.3.2. Weakly singular integrals

For the weakly singular integrals in our problem, such as the velocity single-layer potential representation (4.1), we make use of Alpert quadrature rules. These are a set of corrected trapezoidal rules that achieve high accuracy for $\log$ singularities. They can be constructed for endpoint singularities, as described in Alpert (1999), and require no additional changes to the kernels.

4.3.3. Strongly singular integrals

The remaining layer potential representation that requires special handling is that of the tangential velocity gradient, (4.7). This kernel is strongly singular and does not have known solutions for singularity subtraction. Therefore, we require a set of generalized quadrature rules for Cauchy principal value integrals. We have made use of the quadrature rules described in Carley (2006), based on the work of Kolm & Rokhlin (2001).

However, we cannot make use of the subdivision method in this case. Applying the quadrature rules relies on a change of variables from the reference element $[-1, 1]$ to the surface element, which cannot be naively applied for strongly singular integrals with endpoint singularities (Monegato 1994). Instead, we construct unique quadrature rules for each interior collocation point in the reference element. As before, the element endpoints are handled by considering the union of the adjacent elements with a singularity at the origin $\xi = 0$. The two poles need to be handled separately, since they are by definition one-sided and do not allow a change of variables. Therefore, we are forced to construct quadrature rules directly on the elements $\varGamma _1$ and $\varGamma _M$. This is done by extending the results from Carley (2006) to arbitrary $[a, b]$ intervals. Unlike the previous cases, we require $N+1$ unique quadrature rules to handle the tangential velocity layer potential (4.7).

4.4. Geometry representation

The geometry is represented spectrally by Fourier modes, for high accuracy. Together with the discretization of the layer potentials from § 4.2, the result is a superparametric collocation method.

To compute the tangent vector, the normal vector and the curvature, we make use of the fast Fourier transform. The transform is computed on the uniform grid formed by the collocation points, to avoid issues with non-uniform spacing. Since we only have access to the top half-plane, we mirror the geometry to form a closed curve on which the Fourier transform can be directly applied. Finally, all the quantities are interpolated to the required quadrature points using the Fourier coefficients computed at the uniform collocation points.
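As an illustration of the spectral geometry computation, the sketch below (our own planar analogue, not the axisymmetric implementation) differentiates the Fourier interpolant of a closed curve to obtain the unit tangent and the signed curvature, recovering $\kappa = 1/R$ for a circle of radius $R$:

```python
import numpy as np

def spectral_derivative(f):
    """Differentiate periodic samples on theta in [0, 2*pi) with the FFT."""
    n = f.shape[0]
    k = 1j * np.fft.fftfreq(n, d=1.0 / n)    # integer wavenumbers times i
    return np.real(np.fft.ifft(k * np.fft.fft(f)))

def curve_geometry(x, y):
    """Unit tangent components and signed curvature of a closed planar
    curve sampled uniformly in the parameter."""
    xp, yp = spectral_derivative(x), spectral_derivative(y)
    xpp, ypp = spectral_derivative(xp), spectral_derivative(yp)
    ds = np.sqrt(xp**2 + yp**2)              # arclength metric
    kappa = (xp * ypp - yp * xpp) / ds**3    # signed curvature
    return xp / ds, yp / ds, kappa

n, R = 64, 2.0
theta = 2.0 * np.pi * np.arange(n) / n
tx, ty, kappa = curve_geometry(R * np.cos(theta), R * np.sin(theta))
assert np.max(np.abs(kappa - 1.0 / R)) < 1e-12
```

Because the interpolant is spectral, smooth geometries are differentiated to machine precision, which matters for the high-order curvature terms appearing in the shape gradients.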

4.5. Evolution equation

The interface evolution (2.8) is integrated using the classic third-order strong-stability-preserving Runge–Kutta method

(4.14)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{k}_1 = \boldsymbol{X}^n + {\rm \Delta} \tau \boldsymbol{V}(\boldsymbol{u}^n, \boldsymbol{X}^n), \\ \boldsymbol{k}_2 = \tfrac{3}{4} \boldsymbol{X}^n + \tfrac{1}{4}\{ \boldsymbol{k}_1 + {\rm \Delta} \tau \boldsymbol{V}(\boldsymbol{u}^{n + 1}, \boldsymbol{k}_1) \}, \\ \boldsymbol{X}^{n + 1} = \tfrac{1}{3} \boldsymbol{X}^n + \tfrac{2}{3} \{ \boldsymbol{k}_2 + {\rm \Delta} \tau \boldsymbol{V}(\boldsymbol{u}^{n + {1}/{2}}, \boldsymbol{k}_2)\}. \end{array}\right\} \end{equation}

At each stage of the Runge–Kutta method, we solve the discrete Fredholm integral equation (4.5) and compute the velocity with (4.1). The time step is based on the capillary number restriction of Denner & van Wachem (2015) and is given by

(4.15)\begin{equation} {\rm \Delta} \tau \triangleq \theta \sqrt{\textit{Ca} h_{min}^3}, \end{equation}

where $\theta > 0$ is a Courant number and

(4.16)\begin{equation} h_{min} \triangleq \min_{m \in [\![ 0, M ]\!] } \left|\int_{\varGamma_m} \, {\rm d} S\right|, \end{equation}

is the smallest panel size in physical space. To maintain a smooth surface, we also perform smoothing by the penalized least squares method described in Eilers (2003) at regular time intervals. In the studied cases, the interface deformation is sufficiently small and does not require advanced mesh adaptation techniques. We refer to Pozrikidis (1990) and Ojala & Tornberg (2015) for a discussion of existing methods.
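For reference, one step of (4.14) on a scalar model problem can be sketched as follows (our own illustration, written for an autonomous right-hand side, whereas (4.14) evaluates the velocity at different time levels; the values of $\theta$, Ca and $h_{min}$ in (4.15) are made up):

```python
import math

def ssp_rk3_step(x, dt, V):
    """One step of the third-order strong-stability-preserving Runge-Kutta
    scheme (4.14), for an autonomous right-hand side V."""
    k1 = x + dt * V(x)
    k2 = 0.75 * x + 0.25 * (k1 + dt * V(k1))
    return x / 3.0 + (2.0 / 3.0) * (k2 + dt * V(k2))

def capillary_dt(theta, ca, h_min):
    # capillary time-step restriction (4.15)
    return theta * math.sqrt(ca * h_min**3)

# sanity check on the model problem x' = -x (exact solution exp(-t))
dt = capillary_dt(theta=0.8, ca=0.1, h_min=0.25)   # illustrative values
x = ssp_rk3_step(1.0, dt, lambda y: -y)
assert abs(x - math.exp(-dt)) < 1e-4
```

Note that the step size scales like $h_{min}^{3/2}$, so refining the interface discretization tightens the capillary stability restriction considerably.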

4.6. Discrete shape gradient

The discretization of the shape gradient in theorem 3.5 will be performed using finite element-based methods. The collocation points and Lagrange polynomial basis from the boundary element method are used for this purpose. The inner product used in the discretization is given by

(4.17)\begin{equation} (f, g) \triangleq \int_{\varGamma} f(\boldsymbol{x}) g(\boldsymbol{x}) \rho \, \textrm{d} s. \end{equation}

Discretely, this inner product is described using the mass matrix as follows:

(4.18)\begin{equation} (\boldsymbol{f}, \boldsymbol{g}) \triangleq f_i {\mathsf{M}}_{ij} g_j, \end{equation}

where the mass matrix ${\boldsymbol{\mathsf{M}}}$ is given by

(4.19)\begin{equation} {\mathsf{M}}_{ij} = \int_{\varGamma} \phi_i(\xi) \phi_j(\xi) \rho \, \textrm{d} s = \sum_{p = 0}^P \phi_i(\eta_p) \phi_j(\eta_p) \rho(\eta_p) J(\eta_p) w_p, \end{equation}

for Gauss–Legendre quadrature nodes and weights $(\eta _p, w_p)$. We obtain weak forms of the adjoint normal operator $n^*$ and the adjoint curvature operator $\kappa ^*$ from theorem 3.5 by a direct application of integration by parts. This helps avoid even higher-order derivatives of the geometry and reduces the required smoothness.
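A minimal version of the mass matrix assembly (4.19) is sketched below (our own illustration; it omits the axisymmetric weight $\rho$ and the Jacobian $J$ and works on the reference element $[-1, 1]$):

```python
import numpy as np

def lagrange_basis(nodes, x):
    """Evaluate the nodal Lagrange basis {phi_i} associated with the given
    nodes (phi_i(node_j) = delta_ij) at the points x."""
    phi = np.ones((len(nodes), len(x)))
    for i, xi in enumerate(nodes):
        for j, xj in enumerate(nodes):
            if i != j:
                phi[i] *= (x - xj) / (xi - xj)
    return phi

def element_mass_matrix(nodes, order):
    """M_ij = int_{-1}^{1} phi_i phi_j dx by Gauss-Legendre quadrature."""
    q, w = np.polynomial.legendre.leggauss(order)
    phi = lagrange_basis(nodes, q)            # shape (n_nodes, n_quad)
    return np.einsum('iq,jq,q->ij', phi, phi, w)

# linear (P1) element on [-1, 1]: exact mass matrix is [[2/3, 1/3], [1/3, 2/3]]
M = element_mass_matrix(np.array([-1.0, 1.0]), order=2)
assert np.allclose(M, [[2/3, 1/3], [1/3, 2/3]])
```

In the actual axisymmetric setting each quadrature weight would additionally carry the factors $\rho(\eta_p) J(\eta_p)$ from (4.19).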

Taking the results from theorem 3.5, multiplying by the basis functions and integrating by parts as required, we obtain a linear system of the type

(4.20)\begin{equation} {\boldsymbol{\mathsf{M}}} \boldsymbol{g} = \boldsymbol{b}, \end{equation}

where $\boldsymbol {g}$ is the gradient and $\boldsymbol {b}$ is the right-hand side. To enforce a smooth boundary without cusps at the poles, we additionally require that

(4.21)\begin{equation} \dfrac{\mathrm{d} g}{\mathrm{d} \xi} (0) = \dfrac{\mathrm{d} g}{\mathrm{d} \xi} (1/2) = 0. \end{equation}

In our work, we use Lagrange multipliers to impose the boundary condition and augment the mass matrix accordingly. Note, however, that this condition should be automatically satisfied if the flow quantities appearing in theorem 3.5 are similarly smooth.
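The multiplier-augmented system has the familiar saddle-point block structure. A small NumPy sketch with generic matrices (not the actual boundary element operators; the constraint row is a hypothetical one-sided difference standing in for $\mathrm{d}g/\mathrm{d}\xi = 0$):

```python
import numpy as np

def solve_constrained(M, b, C):
    """Solve M g = b subject to the linear constraints C g = 0 by
    augmenting the system with Lagrange multipliers:
        [ M  C^T ] [ g      ]   [ b ]
        [ C   0  ] [ lambda ] = [ 0 ].
    """
    n, m = M.shape[0], C.shape[0]
    K = np.block([[M, C.T], [C, np.zeros((m, m))]])
    rhs = np.concatenate([b, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# hypothetical small example: an SPD "mass matrix" and one
# derivative-type constraint row (one-sided difference at the first node)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)
b = rng.standard_normal(6)
C = np.zeros((1, 6)); C[0, :2] = [-1.0, 1.0]   # enforce g[1] - g[0] = 0

g, lam = solve_constrained(M, b, C)
print(abs(g[1] - g[0]))   # -> 0.0 (constraint satisfied up to round-off)
```

Because $M$ is symmetric positive definite and the constraint rows are independent, the augmented block matrix is invertible and the constrained gradient is unique.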

4.7. Discrete adjoint quasi-static equation

The adjoint evolution equation in lemma 3.7 is also discretized by a finite element method. Finite element methods on moving meshes have been described in Dziuk & Elliott (Reference Dziuk and Elliott2007) and we employ the same ideas to discretize (3.23). By integrating and using the surface Reynolds transport theorem from appendix A.3, we obtain

(4.22)\begin{equation} - \dfrac{\mathrm{d} }{\mathrm{d} t} ({\boldsymbol{\mathsf{M}}}(t) \boldsymbol{X}^*) = \boldsymbol{V}^*[\boldsymbol{X}^*, \boldsymbol{w}^*; \boldsymbol{X}, \boldsymbol{w}], \end{equation}

where $\boldsymbol {V}^*$ now denotes the combined right-hand side of the evolution equation in lemma 3.7. As the terms are the same as those present in the shape gradient from § 4.6, the same steps are used to obtain an appropriate weak form. To maintain dual consistency at the time advancement level, we use the discrete adjoint of (4.14), namely

(4.23)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{k}^*_2 = \boldsymbol{X}^{*, n + 1} + {\rm \Delta} \tau \hat{\boldsymbol{V}}^*[\boldsymbol{X}^{*, n + 1}, \boldsymbol{w}^{*, n + 1}; \boldsymbol{k}_2, \boldsymbol{w}^{n + {1}/{2}}], \\ \boldsymbol{k}^*_1 = \boldsymbol{k}_2^* + {\rm \Delta} \tau \hat{\boldsymbol{V}}^*[\boldsymbol{k}_2^*, \boldsymbol{w}^{*, n}; \boldsymbol{k}_1, \boldsymbol{w}^{n + 1}], \\ \boldsymbol{X}^{*, n} = \tfrac{1}{3} \boldsymbol{X}^{*, n + 1} + \tfrac{1}{2} \boldsymbol{k}_2^* + \tfrac{1}{6} \{ \boldsymbol{k}^*_1 + {\rm \Delta} \tau \hat{\boldsymbol{V}}^*[\boldsymbol{k}^*_1, \boldsymbol{w}^{*, n - 1}; \boldsymbol{X}^n, \boldsymbol{w}^n]\}, \end{array}\right\}

where $\hat {\boldsymbol {V}}^* \triangleq {\boldsymbol{\mathsf{M}}}^{-1} \boldsymbol {V}^*$. As the adjoint to (4.14), this method maintains the same stability region, but an analysis of its order conditions reveals that it is only second-order accurate. The gradient from theorem 3.8 is integrated in time using the same method, by noting that solving

(4.24)\begin{equation} \dfrac{\mathrm{d} g}{\mathrm{d} t} = -\frac{1}{\textit{Ca}^2} \int_{\varSigma(t)} \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \end{equation}

with $g(T) = 0$, results in $\boldsymbol {\nabla } \mathscr {J}_2 = -g(0)$.
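Marching (4.24) backward in time with the same strong-stability-preserving stencil amounts to a high-order quadrature of the right-hand side. A sketch with a scalar stand-in integrand $f(t)$ in place of the surface integral (hypothetical values; the sign convention $\boldsymbol{\nabla}\mathscr{J}_2 = -g(0)$ follows the text):

```python
import numpy as np

def integrate_gradient(f, T, dt):
    """March dg/dt = -f(t) backward from g(T) = 0 to t = 0 with the
    SSP-RK3 stencil (stepping with a negative step -dt), and return
    -g(0), mimicking the gradient evaluation of (4.24)."""
    def rhs(t):
        return -f(t)

    n = int(round(T / dt))
    g, t = 0.0, T
    for _ in range(n):
        # SSP-RK3 stages with step h = -dt, evaluated at t, t-dt, t-dt/2
        k1 = g - dt * rhs(t)
        k2 = 0.75 * g + 0.25 * (k1 - dt * rhs(t - dt))
        g = g / 3.0 + 2.0 / 3.0 * (k2 - dt * rhs(t - 0.5 * dt))
        t -= dt
    return -g

# stand-in integrand f(t) = cos(t): exactly, -g(0) = -sin(T)
print(integrate_gradient(np.cos, T=1.0, dt=1e-3))   # -> approximately -0.8414709848
```

Since the right-hand side here depends only on $t$, the three stages reduce to a Simpson-type quadrature per step, so the error for this smooth integrand is far below the nominal third-order estimate.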

4.8. Optimization algorithm

The optimization method used in this work is based on gradient descent and requires first-order shape derivatives. For the problem (P1) they are given in theorem 3.5 and for problem (P2) they are given in theorem 3.8. The results provided in the respective theorems refer to the normal component of the gradient. Therefore, a descent direction for the directional derivative

(4.25)\begin{equation} D \mathscr{J}_1[\tilde{\boldsymbol{X}}] = \int_{\varSigma} (\boldsymbol{\nabla} _{\boldsymbol{X}} \mathscr{J}_1) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \end{equation}

is given by

(4.26)\begin{equation} \tilde{\boldsymbol{X}} = -\alpha \boldsymbol{\nabla} _{\boldsymbol{X}} \mathscr{J}_1 \boldsymbol{n}, \end{equation}

for a positive constant $\alpha$. A similar reasoning holds for (P2), where the problem is greatly simplified by the fact that $\nabla _{\textit {Ca}} \mathscr {J}_2 \in \mathbb {R}$. This leads us to the gradient descent method described in algorithm 1.

Algorithm 1 Shape optimization algorithm

In algorithm 1, we have directly used the gradient as a descent direction, resulting in a steepest descent method. This choice is not always optimal and can lead to a jamming phenomenon. A survey of gradient descent methods is given in Hager & Zhang (Reference Hager and Zhang2006). In our work, we have used a conjugate gradient method based on the Fletcher–Reeves update with a step size $\alpha ^{(n)}$ chosen to satisfy the Armijo condition, but not the full Wolfe conditions.
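A compact sketch of such a method, combining the Fletcher–Reeves update with an Armijo backtracking line search, applied to a hypothetical quadratic test problem (not the shape functional itself; the safeguard restart is a common practical addition, not taken from the paper):

```python
import numpy as np

def cg_fletcher_reeves(cost, grad, x0, alpha0=1.0, c1=1e-4,
                       max_iter=100, tol=1e-10):
    """Nonlinear conjugate gradient with the Fletcher-Reeves update and
    an Armijo backtracking line search (sufficient decrease only, no
    Wolfe curvature condition)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        slope = g @ d
        if slope >= 0.0:        # safeguard: restart with steepest descent
            d = -g
            slope = g @ d
        # Armijo backtracking: halve alpha until sufficient decrease
        alpha, f0 = alpha0, cost(x)
        while cost(x + alpha * d) > f0 + c1 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
        d = -g_new + beta * d
        g = g_new
    return x

# hypothetical quadratic test problem: minimize 1/2 x^T A x - b^T x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = cg_fletcher_reeves(lambda x: 0.5 * x @ A @ x - b @ x,
                       lambda x: A @ x - b, np.zeros(2))
print(x)   # -> close to the minimizer [0.2, 0.4]
```

In the shape setting, the update `x + alpha * d` would be replaced by the normal perturbation of the interface and followed by the post-processing steps listed below.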

It is well known in shape optimization problems, like (P1), that the surface shape gradient can have less regularity than the original solution. Intuitively, we can see this from the appearance of the normal velocity gradient and the derivatives of curvature in the shape gradient from theorem 3.5. This lack of smoothness can cause the iterates to diverge when the gradient is used repeatedly in a gradient descent method. To counter some of these issues we perform additional post-processing steps on the geometry $\boldsymbol {X}^{(n + 1)}$ after the update in algorithm 1. They are as follows.

  (1) The geometry is smoothed by a penalized least squares method, described in Eilers (Reference Eilers2003), where the penalization parameter is chosen at runtime.

  (2) Since $\boldsymbol {X}^{(n)}$ describes only the top half-plane of the axisymmetric geometry, we must maintain $\rho ^{(n + 1)} > 0$. This is ensured by choosing a descent step $\alpha ^{(n + 1)}$ that is sufficiently small.

  (3) We also remove any translations by enforcing that the on-axis points satisfy

    (4.27)\begin{equation} x^{(n + 1)}(0) + x^{(n + 1)}(1/2) = 0. \end{equation}

With these additional steps, we have observed that the optimization procedure is sufficiently robust. In particular, accuracy in the computation of the required layer potentials in § 4.1 is maintained and the axisymmetric assumptions are enforced throughout. For more general applications, it may be desired to add additional regularization terms to the cost functional. For example, we can consider

(4.28)\begin{equation} \mathcal{J}_1' = \mathcal{J}_1 + \frac{\alpha}{2} \int_\varSigma \|\kappa\|^2 \, {\rm d} S \end{equation}

to penalize solutions with a non-smooth curvature. We can also use an $H^1$ inner product, which would require solving an additional Laplace–Beltrami equation on the interface.

Remark 4.2 The proposed algorithm for the shape optimization problem, (P1), does not take into account the structure of shape spaces (Michor & Mumford Reference Michor and Mumford2006). Since shape spaces are not flat, a general update formula can be written as

(4.29)\begin{equation} \boldsymbol{X}^{n + 1} = \mathscr{T}[ \boldsymbol{X}^{(n)} - \alpha^{(n)} \boldsymbol{\nabla} _{\boldsymbol{X}} \mathscr{J}_1^{(n)} \boldsymbol{n}^{(n)}], \end{equation}

where $\mathscr {T}$ is a parallel transport operator. Taking the additional structure into account can lead to better behaved algorithms, as seen in Ring & Wirth (Reference Ring and Wirth2012) and Schulz (Reference Schulz2014).

The optimization problem defined by (P2) is simpler in this regard, since it takes place in $\mathbb {R}_+$ and relies on the smoothness of the evolution equation (2.8). Here, we can use the standard steepest descent method with no additional post-processing.

5. Numerical results

In this section, we present our results for the optimization of the two problems we have set out in § 3, namely the shape optimization problem, (P1), and the parameter optimization problem, (P2). All the presented results will be for a single axisymmetric droplet. The formulation of the adjoint equation and gradients from § 3 are general and can be applied to one-, two- or three-dimensional systems. The numerical methods described in § 4 can also be generalized to multiple droplets, with care needed for near-singular evaluations of the layer potentials.

In the following, we will present results for an $(N = 4)$-order collocation scheme. We have chosen this order as a baseline because the adjoint-based gradient from theorem 3.5 contains a second derivative of the velocity field (in the form of $\kappa ^*$), so the result will always be of order $N - 1$ due to the weak formulation. We maintain the same discretization parameters for all the tests we present, as detailed in table 1. The different types of quadrature rules are given in § 4.3.

Table 1. Discretization parameters used in tests 1–6.

The errors are computed with the aid of the discrete $L^2$ norm, defined as

(5.1)\begin{equation} \|\boldsymbol{g}\|_2^2 \triangleq (\boldsymbol{g}, \boldsymbol{g}) = g_i {\mathsf{M}}_{ij} g_j, \end{equation}

where ${\boldsymbol{\mathsf{M}}}$ is the mass matrix. The reported errors are relative, i.e.

(5.2)\begin{equation} E[\hat{\boldsymbol{g}}] \triangleq \frac{\|\boldsymbol{g} - \hat{\boldsymbol{g}}\|_2}{\|\boldsymbol{g}\|_2}, \end{equation}

where $\boldsymbol {g}$ is a known exact solution and $\hat {\boldsymbol {g}}$ is the approximation. In computing orders of convergence, we use the largest element size as an abscissa, i.e.

(5.3)\begin{equation} h_{max} \triangleq \max_{m \in [\![ 0, M ]\!] } |\varGamma_m| = \max_{m \in [\![ 0, M ]\!] } \int_{\varGamma_m} \, {\rm d} S, \end{equation}

where the element lengths are computed using the same quadrature rule as the mass matrix ${\boldsymbol{\mathsf{M}}}$. The orders of convergence, e.g. in table 2, are then computed by a linear fit of $(h_{max}, E)$. The forward problem, consisting of the Stokes solver and the interface evolution equation, is validated in appendix E. We proceed here to validate the adjoint-based gradients and describe optimization results.
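The order computation amounts to the slope of a least squares line in log–log coordinates. A short sketch (the error data here are synthetic, for illustration only):

```python
import numpy as np

def convergence_order(h, err):
    """Estimate the order of convergence from a linear fit of
    (log h_max, log E), as done for the tables in this section."""
    p = np.polyfit(np.log(h), np.log(err), 1)
    return p[0]   # slope of the fit = estimated order

# hypothetical errors behaving like E ~ C h^3
h = np.array([0.2, 0.1, 0.05, 0.025])
err = 4.0 * h**3
print(convergence_order(h, err))   # -> 3.0 (up to round-off)
```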

Table 2. Test 1: relative errors in finite difference versus adjoint gradients for a finite difference step size $\epsilon = 10^{-6}$.

5.1. Adjoint gradient

We analyse here the validity of the adjoint-based gradients from theorem 3.5 and theorem 3.8. In both cases, the gradients will be validated by comparison against finite difference approximations. Further validation is presented in the next section by finding optimal solutions to known problems.

5.1.1. Test 1: static problem gradient validation

We start by validating the adjoint gradient from theorem 3.5 by a standard first-order finite difference approximation

(5.4)\begin{equation} (\boldsymbol{g}_{FD})_i = (\boldsymbol{\nabla} \mathscr{J}_1, \boldsymbol{h}_i) \approx \frac{\mathscr{J}_1(\boldsymbol{X} + \epsilon \boldsymbol{h}_i) - \mathscr{J}_1(\boldsymbol{X})}{\epsilon}, \end{equation}

where $(\boldsymbol {\cdot }, \boldsymbol {\cdot })$ is the inner product (4.17). To ensure that the perturbed interface $\boldsymbol {X}_\epsilon = \boldsymbol {X} + \epsilon \boldsymbol {h}_i$ remains smooth and satisfies the axisymmetric assumptions for every $i$, we choose $\boldsymbol {h}_i \triangleq h_i \boldsymbol {n}$ where

(5.5)\begin{equation} h_i(\xi) \triangleq \sin(2 {\rm \pi}\cos(2 {\rm \pi}\xi)) \exp\left(-\frac{(\xi - \xi_i)^2}{2 \sigma^2}\right). \end{equation}

To compare against the adjoint gradient, we also dot the adjoint gradient with the bump functions in the same way, i.e. $(\boldsymbol {g}_{AD})_i = (\boldsymbol {g}, \boldsymbol {h}_i)$ with $\boldsymbol {g}$ given by (4.20).
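The comparison can be sketched generically: perturb along each bump, form the first-order finite difference of the cost and compare with the inner product of an analytic gradient. A NumPy illustration on a hypothetical smooth functional (a plain dot product stands in for the mass-matrix inner product):

```python
import numpy as np

def bump(xi, xi_i, sigma2=1e-2):
    """Smooth, axisymmetry-friendly bump direction in the spirit of (5.5)."""
    return (np.sin(2 * np.pi * np.cos(2 * np.pi * xi))
            * np.exp(-(xi - xi_i) ** 2 / (2 * sigma2)))

def fd_vs_adjoint(J, grad, x, xi, centers, eps=1e-6):
    """Compare first-order finite differences of J against the inner
    product of an analytic gradient with each bump direction."""
    g = grad(x)
    errs = []
    for xi_i in centers:
        h = bump(xi, xi_i)
        g_fd = (J(x + eps * h) - J(x)) / eps
        g_ad = g @ h              # stand-in for the mass-matrix product
        errs.append(abs(g_fd - g_ad))
    return np.array(errs)

# hypothetical functional J(x) = 1/2 sum x_i^2 on nodal values
xi = np.linspace(0.0, 1.0, 128, endpoint=False)
x = 1.0 + 0.1 * np.sin(2 * np.pi * xi)
errs = fd_vs_adjoint(lambda x: 0.5 * np.sum(x**2), lambda x: x,
                     x, xi, centers=[0.25, 0.5, 0.75])
print(errs.max())   # small: dominated by the first-order FD truncation error
```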

We present results for panel sizes $M \in \{8, 16, 24, 32\}$, finite difference step sizes $\epsilon \in \{10^{-d}: d \in [\![ 1, 7 ]\!] \}$ and a finite difference bump thickness $\sigma ^2 = 10^{-2}$. For the test set-up, we use $u_d \equiv 0$ in (P1) with $\textit {Ca} = 1$ and $\lambda = 0.2$ in an extensional flow

(5.6a,b)\begin{equation} \boldsymbol{u}^\infty = x \boldsymbol{e}_x - \tfrac{1}{2} \rho \boldsymbol{e}_\rho \quad \text{and} \quad \quad p^\infty = 0, \end{equation}

where the geometry is given by an ellipse, for $(a, b) = (1, 2)$, parameterized as

(5.7)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{x}(\xi) = r(\xi) (\cos (2{\rm \pi} \xi), \sin(2{\rm \pi} \xi)), \\ r(\xi) = \dfrac{a b}{\sqrt{(a \cos (2 {\rm \pi}\xi))^2 + (b \sin(2 {\rm \pi}\xi))^2}}. \end{array}\right\} \end{equation}

The relative errors and gradients can be seen in figure 3. We expect that, for a fixed and sufficiently fine mesh, we retrieve first-order convergence in $\epsilon$. We also present the exact values of the errors for $\epsilon = 10^{-6}$ in table 2. For a fixed finite difference step size, we retrieve the expected discretization order of $(N - 1)$ or $3$. We can also see that the errors plateau at $\approx \epsilon$, when the finite difference error dominates.

Figure 3. Test 1: (a) $\log _{10}$ relative errors for finite difference versus adjoint gradient with $M = \{8, 16, 24, 32\}$ elements, where the full line is $O(\epsilon )$; (b) pointwise errors for the finest case with $M = 32$ elements for full range of $\epsilon$.

5.1.2. Test 2: quasi-static problem gradient validation

To validate the adjoint gradient of the quasi-static problem, as given by theorem 3.8, we use a standard finite difference approximation, since $\textit {Ca} \in \mathbb {R}$. We have that

(5.8)\begin{equation} \boldsymbol{\nabla} _{\textit{Ca}} \mathscr{J}_2 \approx \frac{\mathscr{J}_2(\textit{Ca} + \epsilon) - \mathscr{J}_2(\textit{Ca})}{\epsilon}. \end{equation}

For this problem, we take $\textit {Ca} = 0.05, \lambda = 1$ and the far-field (5.6a,b). The interface equation, (2.8), is evolved to $T = 1$ with ${\rm \Delta} \tau = 10^{-3}$ and the initial condition $\boldsymbol {X}(0)$ being a sphere of radius $R = 1$. The desired geometry $\boldsymbol {X}_d$ is obtained by evolving the interface with the same set-up for $\textit {Ca}_d = 0.1$. We have not applied additional smoothing in this case, as we have found it skews the finite difference comparison. Choosing a small capillary number and a small time step instead ensures stability for the duration of the simulation.

In figure 4(a), we present the convergence for $\epsilon \in \{10^{-d}: d \in [\![ 1, 7 ]\!] \}$. We can see that the error for a fixed discretization $M = 32$ behaves as expected, with initial first-order accuracy. However, in figure 4(b), the discretization error behaves considerably better than expected, achieving close to fourth-order convergence. This is an artefact of the fact that $\lambda = 1$, which results in smaller errors (as in table 2), and that the deformation of the drop is small for $\textit {Ca} = 0.05$. In table 3, we present values for the case where $\textit {Ca} = 0.1$ and $\textit {Ca}_d = 0.05$ and obtain orders closer to the expected third order.

Figure 4. Test 2: (a) relative errors in finite difference versus adjoint gradient for a fixed panel count $M = 32$, where the full line is $O(\epsilon )$ convergence; (b) relative errors for a fixed finite difference step size $\epsilon = 10^{-5}$ and $M \in \{8, 12, 16, 24, 32\}$, where the full line is $O(h_{{max}}^3)$ convergence.

Table 3. Test 2: relative errors in finite difference versus adjoint gradients. Case 1 is $\textit {Ca} = 0.05$ and $\textit {Ca}_d = 0.1$ (see figure 4b); case 2 is $\textit {Ca} = 0.1$ and $\textit {Ca}_d = 0.05$.

5.2. Optimization

Finally, we will look at finding close to optimal solutions to the optimization problems (P1) and (P2). These test cases use all the machinery described in § 4. In particular, we will allow post-processing of the interface during its evolution and during the gradient descent (as described in § 4.8).

5.2.1. Test 3: shape optimization

The first test we will perform is a classic shape optimization problem based on (P1). For this problem, we use a high viscosity ratio $\lambda = 48.02$ and capillary number of $\textit {Ca} = 1$, corresponding roughly to a water droplet suspended in air. The far-field boundary conditions for the velocity and pressure are given by

(5.9a,b)\begin{equation} \boldsymbol{u}^\infty = x \boldsymbol{e}_x - \rho \boldsymbol{e}_\rho \quad \text{and} \quad p^\infty = 0. \end{equation}

The initial shape in the optimization is a sphere of radius $R = 1$ and the desired normal velocity field $u_d$ in (3.1) is given by the solution on a geometry from Veerapaneni et al. (Reference Veerapaneni, Gueyffier, Biros and Zorin2009), namely

(5.10)\begin{equation} \left.\begin{array}{c@{}} r(\xi) \triangleq \sqrt{( a \cos^2 (2 {\rm \pi}\, \xi))^2 + (b \sin (2 {\rm \pi}\, \xi))^2} + \cos^2 (2 {\rm \pi}k\, \xi), \\ \boldsymbol{x}(\xi) \triangleq r(\xi) (\cos(2 {\rm \pi}\, \xi), \sin(2 {\rm \pi}\, \xi)), \end{array}\right\}

with $a = 1, b = 1$ and $k = 1$. The initial geometries and velocity fields are shown in figure 5. The optimization is performed using a standard conjugate gradient descent, as described in algorithm 1 from § 4.8, using the Fletcher–Reeves update formula from Hager & Zhang (Reference Hager and Zhang2006). We choose a regularization parameter of $10^{-2}$ (Eilers Reference Eilers2003) and the smoothing is performed at every step of the optimization. Finally, the discretization continues to follow the parameters described in table 1 with $M = 64$ elements.

Figure 5. Test 3: (a) geometry and (b) normal velocity. Initial conditions (dashed), desired solution (dotted) and optimal solution ($\bullet$).

The optimal solutions are shown in figure 5. We can see that the algorithm reaches both the desired geometry and the desired normal velocity field. In figure 6 we have the per-iteration values of the normalized cost functional and gradient, defined as

(5.11a,b)\begin{equation} \hat{\mathcal{J}}^{(n)} = \frac{\mathcal{J}^{(n)}}{\mathcal{J}^{(1)}} \quad \text{and} \quad \|\hat{\boldsymbol{g}}^{(n)}\|_\infty = \frac{\|\boldsymbol{g}^{(n)}\|_\infty}{\|\boldsymbol{g}^{(1)}\|_\infty}. \end{equation}

Figure 6. Test 3: (a) cost functional and (b) descent directions. Absolute values (full) normalized by first iteration (dashed) and Fletcher–Reeves descent direction (dotted).

The optimization algorithm was terminated after $128$ iterations. We can see that the cost functional was decreased by approximately six orders of magnitude and plateaus when approaching the optimum. We have found that, with the chosen smoothing, the optimization is sufficiently robust and optimal solutions can be retrieved for reasonable choices of initial conditions. Of course, this is a highly nonlinear optimization problem, so we do expect the existence of local minima, which a gradient descent algorithm cannot overcome.

5.2.2. Test 4: matching droplet evolution

Next, we look at a quasi-static optimization problem. The physical set-up we will be considering is Taylor's ‘four roller’ apparatus, described and analysed in Taylor (Reference Taylor1934) (see figure 7). The same deformation problem of a droplet in extensional flow was studied in Stone & Leal (Reference Stone and Leal1989). The problem is one of boundary control, where the boundary conditions at ‘infinity’ are given by the (dimensional) velocity field

(5.12)\begin{equation} \boldsymbol{u}^\infty = C (x \boldsymbol{e}_x - \rho \boldsymbol{e}_\rho), \end{equation}

with $C > 0$, the strain rate, as a control parameter. In the non-dimensional equations, this amounts to using the capillary number

(5.13)\begin{equation} \textit{Ca} = \frac{\mu_+ C}{\gamma}, \end{equation}

as a control variable. The viscosity ratio is $\lambda = 48.02$. This is a high viscosity ratio, similar to Taylor's droplet formed from a tar–pitch mixture.

Figure 7. Test 4: schematic of Taylor's ‘four roller’ apparatus (not to scale).

The optimization is set up as follows. The initial guess is given by $\textit {Ca}^{(0)} = 0.1$. The desired geometry in (3.1) is obtained by evolving (2.8) with a capillary number $\textit {Ca}_d = 0.05$. This ensures that $\mathscr {J}_2 \approx 0$ is attainable. For the optimization, we use the steepest descent choice of the descent direction, since the control parameter is a scalar. The optimization is run until convergence.

Finally, to evolve the interface, we use the third-order strong-stability-preserving Runge–Kutta method described in § 4.5, with an initial condition $\boldsymbol {X}(0)$, a sphere of radius $R = 1$. The simulation is run to a final time $T = 1$ and a time step of ${\rm \Delta} \tau = 10^{-3}$, with a coarse mesh of $M = 16$ elements and the parameters from table 1. We also include the tangential forcing described in Loewenberg & Hinch (Reference Loewenberg and Hinch1996) and a smoothing filter from Eilers (Reference Eilers2003), with a filter strength of $10^{-4}$.

The results of the optimization can be seen in figure 8. As before, we report the cost and gradient normalized by their initial values. We can see that the cost functional converges to machine precision in only $14$ conjugate gradient iterations.

Figure 8. Test 4: (a) cost functional and (b) gradient. Normalized by first iteration (dashed) and absolute values (full).

5.3. Test 5: propulsion by Marangoni effects

Finally, we present a quasi-static problem where the control is a prescribed variable surface tension. Such a variation in surface tension can be achieved by a varying surfactant concentration, temperature gradients or electric fields. The first two cases have been studied in Muradoglu & Tryggvason (Reference Muradoglu and Tryggvason2008), for example, to which we refer for further details. We will attempt to find an optimal surfactant distribution on the droplet surface to maximize its propulsion in an otherwise quiescent flow. Due to the variable surface tension, the jump in traction becomes (Prosperetti & Tryggvason Reference Prosperetti and Tryggvason2009)

(5.14)\begin{equation} [\![\, \boldsymbol{f} ]\!] _\lambda = \frac{1}{\textit{Ca}_0} (\gamma \kappa \boldsymbol{n} + \nabla_\varSigma \gamma), \end{equation}

where we take a non-dimensional surface tension that is a perturbation of the constant case

(5.15)\begin{equation} \gamma \triangleq 1 + \epsilon \tilde{\gamma}, \end{equation}

for a small $\epsilon$. The control is taken to be $\tilde {\gamma }$ in the cost functional

(5.16)\begin{equation} \mathscr{J}_3(\tilde{\gamma}) = \tfrac{1}{2} |\boldsymbol{x}_c(T) - \boldsymbol{x}_d|^2, \end{equation}

where $\boldsymbol {x}_d$ is a desired centroid and $\boldsymbol {x}_c(t)$ is the centroid at time $t$, as given by

(5.17)\begin{equation} \boldsymbol{x}_c \triangleq \frac{1}{|\varOmega_-|} \int_{\varOmega_-(t)} \boldsymbol{x} \, \textrm{d} V =\frac{1}{|\varOmega_-|} \int_{\varSigma(t)} \frac{1}{2} \|\boldsymbol{x}\|^2 \boldsymbol{n} \, {\rm d} S. \end{equation}
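As a quick sanity check of the divergence-theorem identity in (5.17), the surface integral can be evaluated on a sphere of radius $R$ centred at a known point $c$, for which the centroid is exactly $c$. A standalone NumPy sketch, independent of the solver (the centre, radius and grid sizes are arbitrary):

```python
import numpy as np

# sphere of radius R centred at c; by (5.17) the surface integral of
# (1/2)||x||^2 n, divided by the volume, must recover c
R, c = 1.0, np.array([0.3, -0.2, 0.5])
nt, nphi = 200, 200
theta = (np.arange(nt) + 0.5) * np.pi / nt        # midpoint rule in theta
phi = (np.arange(nphi) + 0.5) * 2 * np.pi / nphi  # and in phi
T, P = np.meshgrid(theta, phi, indexing="ij")

n = np.stack([np.sin(T) * np.cos(P),
              np.sin(T) * np.sin(P),
              np.cos(T)], axis=-1)                # outward unit normal
x = c + R * n                                     # surface points
dS = R**2 * np.sin(T) * (np.pi / nt) * (2 * np.pi / nphi)

integrand = 0.5 * np.sum(x * x, axis=-1)[..., None] * n
centroid = np.sum(integrand * dS[..., None], axis=(0, 1)) / (4.0 / 3.0 * np.pi * R**3)
print(centroid)   # -> close to c = [0.3, -0.2, 0.5]
```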

As this is a different optimization problem, we must expand on the results from § 3 to allow for the proposed changes. First, we have that the shape gradient and adjoint-based gradient of the cost functional are

(5.18)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{\nabla} _{\boldsymbol{X}} \mathcal{J}_3 = \dfrac{1}{|\varOmega_-|} (\boldsymbol{x}_c(T) - \boldsymbol{x}_d) \boldsymbol{\cdot} (\boldsymbol{X}(T) - \boldsymbol{x}_c(T)), \\ \nabla_{\tilde{\gamma}} \mathscr{J}_3 = \dfrac{\epsilon}{\textit{Ca}_0} (2 \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n} + (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^*) \boldsymbol{\cdot} \boldsymbol{n}). \end{array}\right\} \end{equation}

To finalize the definitions from lemma 3.7, the source term arising from the jump conditions becomes

(5.19)\begin{equation} \mathcal{C}^*[\boldsymbol{w}^*; \boldsymbol{X}] \triangleq \frac{1}{\textit{Ca}_0} ( \kappa^*[\gamma \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{n}] + n^*[\gamma \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{t}] + \kappa \boldsymbol{u}^* \boldsymbol{\cdot} \boldsymbol{j} + \langle \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{j} ), \end{equation}

where $\boldsymbol {j} \triangleq \gamma \kappa \boldsymbol {n} + \nabla _\varSigma \gamma$. An important difference in this case is that the last term now involves an averaging of the normal adjoint velocity gradient, since $\boldsymbol {j}$ also has a tangential component. All the terms can be obtained by following the steps described in appendix C.

For the fluid problem, we choose a quiescent flow $\boldsymbol {u}^\infty = 0$ and $p^\infty = 0$ with an initial condition $\boldsymbol {X}(0)$ given as a sphere of radius $R = 1$. The parameters are given by $\textit {Ca}_0 = 0.05, \lambda = 1$ and $\epsilon = -0.15$. The equations are evolved to a final time of $T = 1$ with a Courant number of $\theta = 0.75$ in (4.15). We have found that the resulting time steps ensure stability for all the configurations found during the optimization.

The optimization is started with an initial guess of

(5.20)\begin{equation} \tilde{\gamma}^{(0)} \triangleq \sin\left(\frac{\rm \pi}{4} \cos (2 \theta) \right), \end{equation}

which is symmetric around $\theta = {\rm \pi}/ 2$, as seen in figure 9(b). Due to this symmetry, the initial guess does not transport the droplet, but simply changes its shape, while remaining centred at $\boldsymbol {x}_c(0) = 0$. The desired centroid is taken to be at a distance of 10 times the radius of the initial sphere, i.e. we have $\boldsymbol {x}_d = (10R, 0)$. For the descent direction, we have used the simpler steepest descent.

Figure 9. Test 5: (a) cost functional and (b) surface tension along iterations of the optimization algorithm.

The cost functional and the control $\tilde {\gamma }$ during the optimization can be seen in figure 9 and figure 10. As before, we can see that the centroid correctly converges to the desired location, with a final cost that approaches machine precision. The optimization was stopped when a step size that satisfied the Armijo condition could not be found, due to the small value of the cost functional. Additional results for a non-symmetric initial distribution of $\tilde {\gamma }$ have shown a clear tendency for the droplets to move in the direction of larger surface tension, as also shown in Muradoglu & Tryggvason (Reference Muradoglu and Tryggvason2008) for a more general case. This is a well-known result for variable surface tension problems.

Figure 10. Test 5: geometry at the final time $T$ along iterations of the optimization algorithm.

6. Conclusions

In the current work, we have presented a derivation of the adjoint equations and adjoint-based gradient for two multiphase flow problems with a sharp interface. On a theoretical level, the main difficulty in handling sharp interfaces lies in the fact that the problem variables are not shape differentiable over the whole domain and must be considered separately over each fluid domain. We extend preliminary results from scalar problems with discontinuous coefficients (Pantz Reference Pantz2005) to the vector Stokes problem. This allows us to derive a general set of adjoint equations for static and quasi-static cases.

The optimization problems we have investigated naturally reduced to the fluid–fluid interface. However, the resulting gradients are highly non-trivial functions of the state and adjoint variables, as well as their derivatives (see § 3). This makes a numerical approximation of the shape gradients difficult. To work around these difficulties, we use boundary integral methods, which allow high-order representations of boundary restrictions for all the quantities of interest. As we have reported in § 5, using boundary integral methods we are able to numerically verify the adjoint-based gradients to at least single precision. We present two optimization problems – a static shape optimization problem and a quasi-static interface tracking optimization – that also reach their known minima to orders of $10^{-8}$ or less. A non-trivial application to variable surface tension problems is also given, showing how the current results can be extended to more complex scenarios.

The numerical results we have presented are axisymmetric and reduce to one-dimensional interfaces. However, the theoretical results concerning the adjoint equations and adjoint-based gradients are independent of the spatial dimension. A fully three-dimensional implementation would allow for a much wider variety of applications. We believe the current work sufficiently motivates the continuous equations and prepares for such future applications. In particular, we anticipate the formulation and results to be useful in verifying forthcoming finite-inertia multiphase flow solvers with adjoint capabilities and in extensions to droplet control with electrostatic or other body forces.

Even so, many open questions remain. A more rigorous footing is required regarding the existence and uniqueness of optimal controls for the presented optimization problems. In general, we have not shown that the adjoint equations and adjoint-based gradients have the required regularity. Extensions to different classes of cost functionals are also required in practice, e.g. problems of drag minimization are very common. However, their shape differentiability in the multiphase case is rarely obvious. Finally, the numerical methods required to accurately represent shape gradients, such as the ones presented in theorem 3.5, are not trivial to implement in the general three-dimensional case. A possible route forward in this regard would require deriving an equivalent volumetric representation of the shape gradient (Giacomini, Pantz & Trabelsi Reference Giacomini, Pantz and Trabelsi2017; Feppon et al. Reference Feppon, Allaire, Bordeu, Cortial and Dapogny2019), which requires less regularity.

Funding

This work was sponsored, in part, by the Office of Naval Research (ONR) as part of the Multidisciplinary University Research Initiatives (MURI) Program, under grant number N00014-16-1-2617.

Declaration of interests

The authors report no conflict of interest.

Appendix A. Notation

We give here a brief description of the notation and main auxiliary results used throughout the paper. We start by defining some functional spaces that appear.

We denote by $\mathcal {C}^k(\varOmega )$, with $\varOmega \subset \mathbb {R}^d$, the space of all $k$-times continuously differentiable functions equipped with the uniform convergence norm. We also define $W^{s, p}(\varOmega )$ as the standard Sobolev spaces of real order $s$ with derivatives in $L^p(\varOmega )$, as described in Temam (Reference Temam2001, chapter 1), Delfour & Zolésio (Reference Delfour and Zolésio2001) and the references therein. We denote by $H^s(\varOmega ) \triangleq W^{s, 2}(\varOmega )$ the corresponding family of Hilbert spaces. If a mapping $f$ is a function of both time and space, we make use of the common notation for Bochner spaces, i.e. we say that $f \in X([0, T]; Y)$ if $f(\cdot , x) \in X([0, T])$, for fixed $x$, and $f(t, \cdot ) \in Y$, for fixed $t$. A similar notation will be employed for vector quantities, where we say that $\boldsymbol {v} \in X(\varOmega; \mathbb {R}^d)$ if each component $v_i \in X(\varOmega )$.

Let $X$ be a Banach space, then we denote its topological dual by $X^*$. If $A: X \to Y$ is a linear operator, where $X$ and $Y$ are Banach spaces, then we define its adjoint by the linear operator $A^*: Y^* \to X^*$ that satisfies

(A1)\begin{equation} \langle A[x], y^* \rangle_{Y \times Y^*} = \langle x, A^*[y^*] \rangle_{X \times X^*}, \end{equation}

for all $x \in X$ and $y^* \in Y^*$, where $\langle \cdot , \cdot \rangle _{X \times X^*}$ denotes the dual mapping. In the case of Hilbert spaces, such as $H^k(\varOmega ) \triangleq W^{k, 2}(\varOmega )$, we can identify the dual with the space itself and the dual mapping with the chosen inner product. We have the following definition of the Fréchet derivative from Luenberger (Reference Luenberger1997).

Definition A.1 Let $f: X \to Y$ be a mapping, with $X$ and $Y$ Banach spaces. If, for every point $x \in X$ and direction $h \in X$, there exists a bounded linear operator (with respect to $h$) $D f(x): X \to Y$ such that

(A2)\begin{equation} \lim_{\|h\|_X \to 0} \frac{\|f(x + h) - f(x) - D f(x) [h]\|_Y}{\|h\|_X} = 0, \end{equation}

then $f$ is said to be Fréchet differentiable at $x$ and $D f(x) [h]$ is its Fréchet derivative in the direction $h$.

If the space $X$ is the direct sum of spaces $X_n$, i.e. $X = \bigoplus _{n} X_n$, then we define the following notation for the partial derivative

(A3)\begin{equation} \dfrac{\partial f}{\partial x_n} [h_n] \triangleq D f(x)[(0, \ldots, h_n, \ldots, 0)]. \end{equation}

If the space $Y = \mathbb {R}$, $f$ is usually called a functional. In this case, we can also define the gradient of $f$, if it exists, as the point $\boldsymbol {\nabla } f(x) \in X^*$ that satisfies

(A4)\begin{equation} D f(x)[h] = \langle h, \boldsymbol{\nabla} f(x) \rangle_{X \times X^*}, \end{equation}

for the chosen dual mapping on $X$. If $X$ is a Hilbert space, the existence of the gradient is guaranteed by the Riesz representation theorem (Luenberger 1997, § 5.3).
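As an illustration of definition A.1 (a toy computation, not part of the paper's formulation), the limit (A2) can be checked numerically for the quadratic functional $f(x) = \tfrac{1}{2} \|A x - b\|^2$ on $\mathbb{R}^n$, whose gradient under the Euclidean inner product is $A^\mathsf{T}(Ax - b)$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)

def f(x):
    # quadratic tracking-type functional f(x) = 0.5 ||A x - b||^2
    r = A @ x - b
    return 0.5 * np.dot(r, r)

def grad_f(x):
    # Riesz representative of D f(x) under the Euclidean inner product
    return A.T @ (A @ x - b)

x = rng.standard_normal(3)
h = rng.standard_normal(3)
h /= np.linalg.norm(h)                   # unit direction

# the remainder in (A2) must vanish faster than ||h||; for a quadratic f it
# equals 0.5 * eps^2 * ||A h||^2, i.e. it is linear in eps after division
for eps in [1e-1, 1e-2, 1e-3]:
    remainder = abs(f(x + eps * h) - f(x) - eps * np.dot(grad_f(x), h))
    assert remainder / eps < 20.0 * eps
```

This mirrors the black-box finite difference checks used to validate the adjoint-based gradients in the body of the paper.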

If $X \triangleq \mathbb {R}^d$, we employ the usual notions of tensor calculus on Cartesian spaces. To avoid ambiguities, we will define our tensor derivatives using the common summation convention over repeated indices. We have that, for a vector $\boldsymbol {u}$ and a tensor $\sigma$,

(A5a,b)\begin{equation} (\boldsymbol{\nabla} \boldsymbol{u})_{ij} \triangleq \nabla_i u_j \quad \text{and} \quad (\boldsymbol{\nabla} \boldsymbol{\cdot} \sigma)_j \triangleq \nabla_i \sigma_{ij}, \end{equation}

are the vector gradient and tensor divergence, respectively. We also denote the left and right multiplications between a vector and a tensor as

(A6a,b)\begin{equation} (\boldsymbol{n} \boldsymbol{\cdot} \sigma)_j \triangleq n_i \sigma_{ij} \quad \text{and} \quad (\sigma \boldsymbol{n})_i \triangleq \sigma_{ij} n_j. \end{equation}

Finally, we use the single dot notation for the Frobenius inner product of two second-order tensors, i.e. $\boldsymbol{\mathsf{A}} \boldsymbol {\cdot } \boldsymbol{\mathsf{B}} \triangleq {\mathsf{A}}_{ij} {\mathsf{B}}_{ij}$. Of special interest in our case are notions of tangential calculus (Delfour & Zolésio 2001). The divergence theorem and Reynolds transport theorem are well known on a volume $\varOmega$ and the following theorems provide extensions to moving surfaces.
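The index conventions in (A6) and the Frobenius product translate directly into explicit summation, e.g. NumPy's einsum; a throwaway sanity check (illustrative only) confirms they agree with ordinary matrix-vector algebra:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = rng.standard_normal((3, 3))      # a second-order tensor
n = rng.standard_normal(3)               # a vector

# left and right vector-tensor products from (A6a,b)
left = np.einsum('i,ij->j', n, sigma)    # (n . sigma)_j = n_i sigma_ij
right = np.einsum('ij,j->i', sigma, n)   # (sigma n)_i = sigma_ij n_j
assert np.allclose(left, n @ sigma)
assert np.allclose(right, sigma @ n)

# Frobenius inner product A . B = A_ij B_ij
B = rng.standard_normal((3, 3))
frob = np.einsum('ij,ij->', sigma, B)
assert np.isclose(frob, np.sum(sigma * B))
```

Note that for a non-symmetric tensor the left and right products differ, which is why the two are distinguished in (A6).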

Theorem A.2 (Surface divergence theorem)

Let $\boldsymbol {v} \in \mathcal {C}^1(\varSigma; \mathbb {R}^d)$, where $\varSigma$ is a smooth manifold of dimension $d - 1$. Then

(A7)\begin{equation} \int_{\varSigma} \nabla_\varSigma \boldsymbol{\cdot} \boldsymbol{v} \, {\rm d} S = \int_\varSigma \kappa \boldsymbol{v} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S + \int_{\partial \varSigma} \boldsymbol{v} \boldsymbol{\cdot} \boldsymbol{\mu} \, {\rm d} S, \end{equation}

where $\kappa$ denotes the additive curvature (twice the mean curvature) and the surface divergence is defined in Delfour & Zolésio (Reference Delfour and Zolésio2001, § 5.4). The conormal is defined as $\boldsymbol {\mu } \triangleq \boldsymbol {\tau } \times \boldsymbol {n}$, where $\boldsymbol {\tau }$ is the boundary tangent positively oriented with respect to $\boldsymbol {n}$.

Theorem A.3 (Surface Reynolds transport theorem)

Let $f \in \mathcal {C}^1([0, T]; \mathcal {C}^1(\varOmega ))$ and $\varSigma (t)$ be a moving surface with velocity $\boldsymbol {w}$. Then

(A8)\begin{align} \dfrac{\mathrm{d} }{\mathrm{d} t} \int_{\varSigma(t)} f(t, \boldsymbol{x}) \,{\rm d}S &= \int_{\varSigma(t)} \dfrac{\partial f}{\partial t} \, {\rm d} S + \int_{\varSigma(t)} (\boldsymbol{\nabla} f \boldsymbol{\cdot} \boldsymbol{n} + \kappa f) \boldsymbol{w} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &= \int_{\varSigma(t)} \dfrac{\mathrm{d} f}{\mathrm{d} t} \, {\rm d} S + \int_{\varSigma(t)} (\nabla_\varSigma \boldsymbol{\cdot} \boldsymbol{w}) f \, {\rm d} S. \end{align}
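Theorem A.3 can be sanity-checked on a simple prescribed motion. The toy computation below assumes a sphere of radius $R(t)$ with $f \equiv 1$, so that the $\partial f / \partial t$ and $\boldsymbol{\nabla} f$ terms drop and (A8) reduces to $\mathrm{d}A/\mathrm{d}t = \int \kappa \, (\boldsymbol{w} \boldsymbol{\cdot} \boldsymbol{n}) \, \mathrm{d}S$ with $\kappa = 2/R$:

```python
import numpy as np

def radius(t):
    # prescribed interface motion: a sphere with radius R(t)
    return 1.0 + 0.25 * np.sin(t)

def area(t):
    # surface area of the sphere
    return 4.0 * np.pi * radius(t)**2

t, dt = 0.7, 1e-4
# left-hand side of (A8) with f = 1, by central differences
lhs = (area(t + dt) - area(t - dt)) / (2.0 * dt)

# right-hand side: int kappa (w . n) dS with kappa = 2/R and w . n = dR/dt,
# both constant over the sphere, so the integral is a multiplication by area
dRdt = (radius(t + dt) - radius(t - dt)) / (2.0 * dt)
rhs = (2.0 / radius(t)) * dRdt * area(t)

assert abs(lhs - rhs) / abs(rhs) < 1e-6
```

The same bookkeeping, with general $f$ and surface velocity, is what produces the extra curvature terms in the adjoint evolution equation of § 3.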

Appendix B. Axisymmetric geometry shape derivatives

In this appendix, we provide a statement of the required shape derivatives of the main geometric quantities in an axisymmetric setting, i.e. for the tangent vector, normal vector and the curvature. Similar results can be found in the two-dimensional setting in Castro et al. (2007) and in the three-dimensional setting in Walker (2015). The corresponding adjoint operators are not provided in the given references, but can be derived easily from the linearized results.

In the context of shape optimization, we know from the Hadamard–Zolésio structure theorem that only the normal component of the perturbation is relevant to first order. However, we present here the results in their full generality, so that extensions to second-order Newton-based methods can also be performed, if desired. The required simplifications can be performed by decomposing a perturbation as

(B1)\begin{equation} \tilde{\boldsymbol{X}} = \tilde{X}_n \boldsymbol{n} + \tilde{X}_t \boldsymbol{t}, \end{equation}

and applying the Frenet–Serret formulae. To derive the corresponding adjoint operators, we use the inner product (4.17). The adjoint operators are derived according to the definitions from appendix A using integration by parts.

In the following, we will give the definitions of all the geometric quantities (their linearizations and corresponding adjoint operators) that are required in the statement of the adjoint-based gradient in § 3.1 and § 3.2. The notation employed is that $f'$ denotes the Eulerian shape derivative and $f^*$ denotes the corresponding adjoint operator.

  (1) Tangent vector. The tangent vector operators are

    (B2)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{t}(\boldsymbol{X}) \triangleq \dfrac{\mathrm{d}\kern0.006em \boldsymbol{X}}{\mathrm{d} s} = \left( \dfrac{\mathrm{d}\kern0.006em x}{\mathrm{d} s}, \dfrac{\mathrm{d} \rho}{\mathrm{d} s} \right), \\ \boldsymbol{t}'[\tilde{\boldsymbol{X}}] \triangleq \left( \dfrac{\mathrm{d}\kern0.006em \tilde{\boldsymbol{X}}}{\mathrm{d} s} \boldsymbol{\cdot} \boldsymbol{n} \right) \boldsymbol{n}, \\ \boldsymbol{t}^*[\boldsymbol{X}^*] \triangleq -\dfrac{1}{\rho} \dfrac{\mathrm{d} }{\mathrm{d} s} (\rho \boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{n} - \kappa_x (\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{n}) \boldsymbol{t}. \end{array}\right\} \end{equation}
  (2) Normal vector. The normal vector operators are

    (B3)\begin{equation} \left.\begin{array}{c@{}} \boldsymbol{n}(\boldsymbol{X}) \triangleq \left( \dfrac{\mathrm{d} \rho}{\mathrm{d} s}, - \dfrac{\mathrm{d}\kern0.7pt x}{\mathrm{d} s} \right), \\ \boldsymbol{n}'[\tilde{\boldsymbol{X}}] \triangleq -\left( \dfrac{\mathrm{d}\kern0.7pt \tilde{\boldsymbol{X}}}{\mathrm{d} s} \boldsymbol{\cdot} \boldsymbol{n} \right) \boldsymbol{t}, \\ \boldsymbol{n}^*[\boldsymbol{X}^*] \triangleq \dfrac{1}{\rho} \dfrac{\mathrm{d} }{\mathrm{d} s} (\rho \boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{t}) \boldsymbol{n} + \kappa_x (\boldsymbol{X}^* \boldsymbol{\cdot} \boldsymbol{t}) \boldsymbol{t}. \end{array}\right\} \end{equation}
  (3) First principal curvature. The first principal curvature operators are

    (B4)\begin{equation} \left.\begin{array}{c@{}} \kappa_x(\boldsymbol{X}) \triangleq - \dfrac{\mathrm{d} ^2 \boldsymbol{X}}{\mathrm{d} s^2} \boldsymbol{\cdot} \boldsymbol{n}, \\ \kappa_x'[\tilde{\boldsymbol{X}}] \triangleq -\left( 2 \kappa_x \dfrac{\mathrm{d}\kern0.7pt \tilde{\boldsymbol{X}}}{\mathrm{d} s} \boldsymbol{\cdot} \boldsymbol{t} + \dfrac{\mathrm{d} ^2 \tilde{\boldsymbol{X}}}{\mathrm{d} s^2} \boldsymbol{\cdot} \boldsymbol{n}\right), \\ \kappa^*_x[X^*] \triangleq \left( \dfrac{1}{\rho} \dfrac{\mathrm{d} }{\mathrm{d} s} (\rho \kappa_x X^*) - \dfrac{\kappa_x}{\rho} \dfrac{\mathrm{d} }{\mathrm{d} s} (\rho X^*) \right) \boldsymbol{t} - \left( \dfrac{1}{\rho} \dfrac{\mathrm{d} ^2 }{\mathrm{d} s^2} (\rho X^*) + \kappa_x^2 X^* \right) \boldsymbol{n}. \end{array}\right\} \end{equation}
  (4) Second principal curvature. The second principal curvature operators are

    (B5)\begin{equation} \left.\begin{array}{c@{}} \kappa_\phi(\boldsymbol{X}) \triangleq \dfrac{n_\rho}{\rho}, \\ \kappa_\phi'[\tilde{\boldsymbol{X}}] \triangleq \dfrac{n_x}{n_\rho} \kappa_\phi (\kappa_x - \kappa_\phi) \tilde{X}_t - \dfrac{n_x}{n_\rho} \kappa_\phi \dfrac{\mathrm{d} \tilde{X}_n}{\mathrm{d} s} - \kappa_\phi^2 \tilde{X}_n, \\ \kappa^*_\phi[X^*] \triangleq n_x \dfrac{\kappa_x - \kappa_\phi}{\rho} X^* \boldsymbol{t} + \left( \dfrac{1}{\rho J} \dfrac{\mathrm{d} }{\mathrm{d} s} (n_x X^*) - \kappa_\phi^2 X^* \right) \boldsymbol{n}. \end{array}\right\} \end{equation}

Finally, the additive curvature is given as the sum of the two principal curvatures. For a derivation of the principal curvatures in axisymmetric coordinates see Pozrikidis (2011, § 1.8.7).

The adjoint operators given above are not valid for all inputs because the limits as $\rho \to 0$ are not always well-defined. If the inputs are also part of an axisymmetric problem, as is the case of the resulting gradient in lemma 3.4, the limits are all well-defined and bounded by a simple application of L’Hôpital's rule. However, numerically, care must still be taken to ensure a robust result. In our case, this is handled by the finite element method presented in § 4, since the area element also contains a $\rho$ factor.
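As a concrete check of the base quantities in (B2)-(B5) (a toy finite difference computation, not the finite element treatment of § 4), the generating curve of the unit sphere can be discretized away from the poles and both principal curvatures recovered numerically:

```python
import numpy as np

# generating curve of the unit sphere: X(s) = (cos s, sin s), s in (0, pi),
# sampled away from the poles where rho -> 0 and the formulae degenerate
s = np.linspace(0.1, np.pi - 0.1, 200)
x, rho = np.cos(s), np.sin(s)

# second-order finite differences suffice for this sanity check
tx, trho = np.gradient(x, s), np.gradient(rho, s)   # tangent t = dX/ds, (B2)
nx, nrho = trho, -tx                                # normal n, (B3)

# first principal curvature kappa_x = -d^2 X / ds^2 . n, (B4)
kappa_x = -(np.gradient(tx, s) * nx + np.gradient(trho, s) * nrho)
# second principal curvature kappa_phi = n_rho / rho, (B5)
kappa_phi = nrho / rho

# for the unit sphere both principal curvatures equal one; the endpoints are
# excluded because np.gradient is only first-order accurate there
assert np.allclose(kappa_x[2:-2], 1.0, atol=1e-3)
assert np.allclose(kappa_phi[2:-2], 1.0, atol=1e-3)
```

The additive curvature $\kappa = \kappa_x + \kappa_\phi$ then evaluates to $2$, as expected for the unit sphere.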

Appendix C. Derivation of adjoint equations

In this appendix we will follow the Lagrangian formalism to derive the adjoint-based gradients for (P1) and (P2).

C.1. Static problem

For (P1), the problem reduces to finding the shape derivative of the Lagrangian functional (3.6)

(C1)\begin{align} \mathscr{L}_1 &\triangleq \mathscr{J}_1 + \int_{\varOmega_+} \boldsymbol{\nabla} \boldsymbol{u}^*_+ \boldsymbol{\cdot} \sigma_+ \, \textrm{d} V - \int_{\varOmega_+} p^*_+ (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_+)\, \textrm{d} V + \frac{1}{\textit{Ca}} \int_\varSigma \kappa \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &\quad + \int_{\varOmega_-} \lambda \boldsymbol{\nabla} \boldsymbol{u}^*_- \boldsymbol{\cdot} \sigma_- \, \textrm{d} V - \int_{\varOmega_-} \lambda p^*_- (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_-)\, \textrm{d} V \nonumber\\ &\quad + \int_\varSigma \langle \,\boldsymbol{f} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u}^* ]\!] \, {\rm d} S + \int_\varSigma \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] \, {\rm d} S. \end{align}

The admissible variations in this case are those that satisfy the assumptions in § 2.2. Additional assumptions can be added if specializing to the axisymmetric case. Applying lemma 2.1 to each term in the Lagrangian, we obtain

(C2)\begin{align} D \mathscr{L}_1[\tilde{\boldsymbol{X}}] &= \int_\varSigma j (\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}') \,{\rm d}S + \int_\varSigma \left(j ( \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}) \boldsymbol{\cdot} \boldsymbol{n} + \frac{\kappa}{2} \|j\|^2 \right) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S\nonumber\\ &\quad - \int_\varSigma [\![ \boldsymbol{\nabla} \boldsymbol{u}^* \boldsymbol{\cdot} \sigma ]\!] _\lambda \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S + \int_\varSigma [\![ p^* (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}) ]\!] _\lambda \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &\quad + \frac{1}{\textit{Ca}} \int_\varSigma \{ \kappa' \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n} + \kappa \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n}' \} \, {\rm d} S \nonumber\\ &\quad + \frac{1}{\textit{Ca}} \int_\varSigma \{ \kappa \boldsymbol{n} \boldsymbol{\cdot} \langle \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^* \rangle + \kappa^2 \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n} \} \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &\quad + \int_\varSigma \langle \,\boldsymbol{f} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u}^* ]\!] \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S + \int_\varSigma \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u} ]\!] \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S, \end{align}

where $j \triangleq (\boldsymbol {u} \boldsymbol {\cdot } \boldsymbol {n} - u_d)$. First, we note that the velocity field is incompressible in both fluids, so we have that

(C3)\begin{equation} \int_\varSigma [\![ p^* (\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}) ]\!] _\lambda (\tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n}) \,{\rm d}S = 0. \end{equation}

While most terms above are direct applications of lemma 2.1, the last two terms have been greatly simplified. Applying the lemma directly to the last term gives

(C4)\begin{align} \dfrac{\partial }{\partial \boldsymbol{X}} \int_\varSigma \langle \boldsymbol{n} \boldsymbol{\cdot} \sigma^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] \, {\rm d} S &= \int_\varSigma \langle \boldsymbol{n}' \boldsymbol{\cdot} \sigma^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] \, {\rm d} S \nonumber\\ &\quad +\int_\varSigma (\boldsymbol{\nabla} ( \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] ) \boldsymbol{\cdot} \boldsymbol{n}) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \nonumber\\ &\quad+ \int_\varSigma (\kappa \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] ) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S. \end{align}

However, we know that at a stationary point of the Lagrangian, $[\![ \boldsymbol {u} ]\!] = 0$, so the first and last terms will vanish. For the second term a simple application of the product rule gives

(C5)\begin{equation} \boldsymbol{\nabla} ( \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] ) \boldsymbol{\cdot} \boldsymbol{n} = \langle \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \sigma^* \boldsymbol{n} \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{u} ]\!] + \langle \,\boldsymbol{f}^* \rangle _\lambda \boldsymbol{\cdot} [\![ \boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{u} ]\!], \end{equation}

where the first term again cancels, leaving the desired result. Finally, we use the definition of the stress tensor to write

(C6)\begin{equation} [\![ \boldsymbol{\nabla} \boldsymbol{u}^* \boldsymbol{\cdot} \sigma ]\!] _\lambda = 2 [\![ \varepsilon^* \boldsymbol{\cdot} \varepsilon ]\!] _\lambda. \end{equation}

To obtain the shape gradient from theorem 3.5, we use the adjoint operators for the normal and curvature (defined in appendix B) by writing

(C7)\begin{equation} \int_\varSigma ( \kappa' \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n} + \kappa \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n}') \,{\rm d}S = \int_\varSigma ( \kappa^*[ \langle \boldsymbol{u}^* \rangle \boldsymbol{\cdot} \boldsymbol{n}] + \boldsymbol{n}^*[\kappa \langle \boldsymbol{u}^* \rangle ] ) \boldsymbol{\cdot} \tilde{\boldsymbol{X}} \, {\rm d} S, \end{equation}

and gather all the terms. We note that $[\![ \boldsymbol {u}^* ]\!] = 0$ and $[\![ (\boldsymbol {n} \boldsymbol {\cdot } \boldsymbol {\nabla } \boldsymbol {u}^*) \boldsymbol {\cdot } \boldsymbol {n} ]\!] = 0$, so the averages around these terms can be dropped.

C.2. Quasi-static problem

We will now look at (P2). We can see from the definition of the Lagrangian (3.19) that we can write

(C8)\begin{align} \mathscr{L}_2 &= \mathscr{J}_2 - \int_0^T \langle \boldsymbol{z}^*, \mathscr{E}_1(\boldsymbol{w}, \boldsymbol{X}) \rangle \, \textrm{d} t \nonumber\\ &\quad - \int_0^T \int_\varSigma \boldsymbol{X}^* \boldsymbol{\cdot} (\boldsymbol{X}_t - \boldsymbol{V}(\boldsymbol{u}, \boldsymbol{X})) \,{\rm d}S \, \textrm{d} t - \int_\varSigma \boldsymbol{X}^*_0 \boldsymbol{\cdot} (\boldsymbol{X}(0) - \boldsymbol{X}_0) \,{\rm d}S, \end{align}

where the main part of the Lagrangian is composed of the constraints of the previous optimization problem. Therefore, we will focus on the shape derivatives of the remaining terms. We have that

(C9)\begin{align} D \mathscr{L}_2[\tilde{\boldsymbol{X}}] &= D \mathscr{J}_2[\tilde{\boldsymbol{X}}] - \int_0^T D \langle \boldsymbol{z}^*, \mathscr{E}_1 \rangle [\tilde{\boldsymbol{X}}] \, \textrm{d} t \nonumber\\ &\quad - \int_0^T \int_\varSigma \boldsymbol{X}^* \boldsymbol{\cdot} (\tilde{\boldsymbol{X}}_t - \boldsymbol{V}'[\tilde{\boldsymbol{X}}]) \,{\rm d}S \, \textrm{d} t \nonumber\\ &\quad + \int_0^T \int_\varSigma (\boldsymbol{X}^* \boldsymbol{\cdot} (\boldsymbol{n} \boldsymbol{\cdot} \boldsymbol{\nabla} \boldsymbol{V})) \tilde{\boldsymbol{X}} \boldsymbol{\cdot} \boldsymbol{n} \, {\rm d} S \, \textrm{d} t \nonumber\\ &\quad- \int_\varSigma \boldsymbol{X}^*_0 \boldsymbol{\cdot} \tilde{\boldsymbol{X}}(0) \,{\rm d}S. \end{align}

We can now use the surface Reynolds transport theorem (theorem A.3) to obtain

(C10)\begin{align} \int_0^T \int_\varSigma \boldsymbol{X}^* \boldsymbol{\cdot} \tilde{\boldsymbol{X}}_t \, {\rm d} S \, \textrm{d} t& = \left.\int_\varSigma \boldsymbol{X}^* \boldsymbol{\cdot} \tilde{\boldsymbol{X}} \, {\rm d} S\right|_{t = 0}^{t = T} -\int_0^T \int_\varSigma \boldsymbol{X}^*_t \boldsymbol{\cdot} \tilde{\boldsymbol{X}} \, {\rm d} S \, \textrm{d} t \nonumber\\ &\quad - \int_0^T \int_\varSigma (\nabla_\varSigma \boldsymbol{\cdot} \boldsymbol{V}) \boldsymbol{X}^* \boldsymbol{\cdot} \tilde{\boldsymbol{X}} \, {\rm d} S \, \textrm{d} t, \end{align}

where the velocity of the interface is given by $\boldsymbol {V}(\boldsymbol {u}, \boldsymbol {X})$ in (2.8). Gathering the terms gives the evolution equation from lemma 3.7.

We note that for the Lagrangian to be shape differentiable and the manipulations above to be valid, we require that $\boldsymbol {V}$ is shape differentiable. Following the same considerations as those preceding (2.16), we note that

(C11)\begin{equation} \boldsymbol{V} \triangleq \boldsymbol{V}(\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}, \boldsymbol{X}), \end{equation}

i.e. the forcing must also only depend on the normal component of the velocity field.

Appendix D. Axisymmetric Stokes kernels

In this appendix, we will give a listing of all the kernels described in § 4.1 in the axisymmetric setting. Under the axisymmetric assumptions, a vector $\boldsymbol {u}$ is given by the components $(u_x(x, \rho ), u_\rho (x, \rho ), 0)$, i.e. it does not depend on the azimuthal coordinate $\phi$. Rotated to the Cartesian coordinate system, we have that

(D1)\begin{equation} \boldsymbol{u} = (u_x, u_\rho \cos \phi, u_\rho \sin \phi), \end{equation}

where $\phi$ is the azimuthal angle. To determine the axisymmetric kernels, we introduce this form into the full three-dimensional formulae.

We also introduce the following integral (Pozrikidis 1992, (2.4.7)), for $\boldsymbol {x}_0 \triangleq (x_0, \rho _0, 0)$ and $\boldsymbol {x} \triangleq (x, \rho , \phi )$:

(D2)\begin{equation} I_{mn}(\boldsymbol{x}_0, \boldsymbol{x}) = \int_0^{2 {\rm \pi}} \frac{\cos^n \phi}{r^m} \, \textrm{d} \phi = \frac{4 k^m}{(\sqrt{4 \rho \rho_0})^m} \int_{0}^{{\rm \pi} / 2} \frac{(2 \cos^2 \phi - 1)^n}{(\sqrt{1 - k^2 \cos^2 \phi})^{m}} \, \textrm{d} \phi, \end{equation}

where

(D3)\begin{equation} k^2 = \frac{4 \rho \rho_0}{(x - x_0)^2 + (\rho + \rho_0)^2}. \end{equation}

These integrals can be expressed analytically in terms of complete elliptic integrals of the first and second kind

(D4a,b)\begin{equation} F(k) = \int_0^{{\rm \pi} / 2} \frac{1}{\sqrt{1 - k^2 \cos^2 \phi}} \, \textrm{d} \phi, \quad E(k) = \int_0^{{\rm \pi} / 2} \sqrt{1 - k^2 \cos^2 \phi} \, \textrm{d} \phi. \end{equation}

We have the following standard kernels.

  (1) The velocity kernel (from (4.1)) is

    (D5)\begin{equation} \left.\begin{array}{c@{}} {\mathsf{G}}_{xx} = \rho (I_{10} + (x - x_0)^2 I_{30}), \\ {\mathsf{G}}_{\rho x} = \rho (x - x_0) (\rho I_{30} - \rho_0 I_{31}), \\ {\mathsf{G}}_{x\rho} = \rho (x - x_0) (\rho I_{31} - \rho_0 I_{30}), \\ {\mathsf{G}}_{\rho \rho} = \rho (I_{11} + (\rho^2 + \rho_0^2) I_{31} - \rho \rho_0 (I_{30} + I_{32})). \end{array}\right\} \end{equation}
  (2) The pressure kernel (from (4.2)) is

    (D6)\begin{equation} \left.\begin{array}{c@{}} P_x = 2 \rho (x - x_0) I_{30}, \\ P_\rho = 2 \rho (\rho I_{30} - \rho_0 I_{31}). \end{array}\right\} \end{equation}
  (3) The stress kernel (from (4.3)) is

    (D7)\begin{equation} \left.\begin{array}{c@{}} {\mathsf{T}}_{xxx} = -6 \rho (x - x_0)^3 I_{50}, \\ {\mathsf{T}}_{xx\rho} = -6 \rho (x - x_0)^2 (\rho I_{51} - \rho_0 I_{50}), \\ {\mathsf{T}}_{x\rho x} = -6 \rho (x - x_0)^2 (\rho I_{50} - \rho_0 I_{51}), \\ {\mathsf{T}}_{x\rho\rho} = -6 \rho (x - x_0) ((\rho^2 + \rho_0^2) I_{51} - \rho \rho_0 (I_{50} + I_{52})), \\ {\mathsf{T}}_{\rho x x} = -6 \rho (x - x_0)^2 (\rho I_{51} - \rho_0 I_{50}), \\ {\mathsf{T}}_{\rho x \rho} = -6 \rho (x - x_0) (\rho^2 I_{52} + \rho_0^2 I_{50} - 2 \rho \rho_0 I_{51}), \\ {\mathsf{T}}_{\rho \rho x} = -6 \rho (x - x_0) ((\rho^2 + \rho_0^2) I_{51} - \rho \rho_0 (I_{50} - I_{52})), \\ {\mathsf{T}}_{\rho \rho\rho} = -6 \rho (\rho^3 I_{52} - \rho^2 \rho_0 (I_{53} + 2 I_{51}) + \rho \rho_0^2 (I_{50} + 2 I_{52}) - \rho_0^3 I_{51}). \end{array}\right\} \end{equation}
  (4) The velocity gradient kernel (from (4.6)) is

    (D8)\begin{equation} \left.\begin{array}{c@{}} {\mathsf{U}}_{xxx} = {\mathsf{T}}_{xxx} / 2 - \rho (x - x_0) I_{30}, \\ {\mathsf{U}}_{xx\rho} = {\mathsf{T}}_{xx\rho} / 2 + \rho (\rho I_{31} - \rho_0 I_{30}), \\ {\mathsf{U}}_{x\rho x} = {\mathsf{T}}_{x\rho x} / 2 - \rho (\rho I_{30} - \rho_0 I_{31}), \\ {\mathsf{U}}_{x\rho \rho} = {\mathsf{T}}_{x\rho\rho} / 2 - \rho (x - x_0) I_{31}, \\ {\mathsf{U}}_{\rho x x} = {\mathsf{T}}_{\rho xx} / 2 - \rho (\rho I_{30} - \rho_0 I_{31}), \\ {\mathsf{U}}_{\rho x\rho} = {\mathsf{T}}_{\rho x\rho} / 2 - \rho (x - x_0) I_{30}, \\ {\mathsf{U}}_{\rho \rho x} = {\mathsf{T}}_{\rho\rho x} / 2 + \rho (x - x_0) I_{31}, \\ {\mathsf{U}}_{\rho \rho \rho} = {\mathsf{T}}_{\rho\rho\rho} / 2 - \rho (\rho I_{30} - \rho_0 I_{31}). \end{array}\right\} \end{equation}

The velocity and stress kernels are also provided in Pozrikidis (1990, appendix B). Additionally, we have the following kernels used in singularity subtractions.

  (5) The pressure (from (4.12)) is

    (D9)\begin{equation} \tilde{p}_\alpha(\boldsymbol{x}, \boldsymbol{y}) = R_{\alpha\beta\gamma}(\boldsymbol{x}, \boldsymbol{y}) n_\beta(\,\boldsymbol{y}) n_\gamma(\boldsymbol{x}), \end{equation}
    where
    (D10a,b)\begin{equation} \left.\begin{array}{c@{}} {\mathsf{R}}_{xxx} = 2 \rho (x - x_0) I_{30}, \\ {\mathsf{R}}_{xx\rho} = 2 \rho (\rho_0 I_{30} - \rho I_{31}), \\ {\mathsf{R}}_{x\rho x} = 2 \rho (\rho I_{30} - \rho_0 I_{31}), \\ {\mathsf{R}}_{x\rho \rho} = 2 \rho (x - x_0) I_{31},\end{array}\right\} \quad \left.\begin{array}{c@{}} {\mathsf{R}}_{\rho x x} = -{\mathsf{R}}_{xx\rho}, \\ {\mathsf{R}}_{\rho x \rho} = {\mathsf{R}}_{xxx}, \\ {\mathsf{R}}_{\rho \rho x} = -{\mathsf{R}}_{x\rho\rho}, \\ {\mathsf{R}}_{\rho\rho\rho} = {\mathsf{R}}_{x\rho x}. \end{array}\right\} \end{equation}

Finally, we have the following expressions for the integrals $I_{mn}$ used in the above kernel definitions. On the left, we have the expressions for $\rho _0 > 0$ and on the right a Taylor expansion for $\rho _0 \to 0$, where $r = \|\boldsymbol {x} - \boldsymbol {y}\|$.

  (1) First order

    (D11a,b)\begin{equation} \left.\begin{array}{c@{}} I_{10} = \dfrac{4}{r} F, \\ I_{11} = \dfrac{4}{r k^2} ((2 - k^2) F - 2 E), \end{array}\right\} \quad \left.\begin{array}{c@{}} I_{10} \approx \dfrac{2 {\rm \pi}}{r}, \\ I_{11} \approx 0. \end{array}\right\} \end{equation}
  (2) Third order

    (D12a,b)\begin{equation} \left.\begin{array}{c@{}} I_{30} = \dfrac{4}{r^3} \dfrac{1}{1 - k^2} E, \\ I_{31} = \dfrac{4}{r^3 k^2} \left(\left(\dfrac{1}{1 - k^2} + 1\right) E - 2 F\right), \\ I_{32} = \dfrac{4}{r^3 k^4} \left(\left(\dfrac{1}{1 - k^2} + (7 - k^2)\right) E - 4 (2 - k^2) F\right), \end{array}\right\} \quad \left.\begin{array}{c@{}} I_{30} \approx \dfrac{2 {\rm \pi}}{r^3}, \\ I_{31} \approx 0, \\ I_{32} \approx \dfrac{\rm \pi}{r^3}. \end{array}\right\} \end{equation}
  (3) Fifth order

    (D13)\begin{equation} \left.\begin{array}{c@{}} I_{50} = \dfrac{4}{3 r^5} \dfrac{ 2 (2 - k^2) J_{30} - J_{10}}{1 - k^2}, \\ I_{51} = \dfrac{4}{3 r^5} \dfrac{ 2 (1 - k^2 + k^4) J_{30} - (2 - k^2) J_{10}}{k^2 (1 - k^2)}, \\ I_{52} = \dfrac{4}{3 r^5} \dfrac{ 2 (6 k^2 - k^6 - 4) J_{30} - (k^4 + 8 k^2 - 8) J_{10}}{k^4 (1 - k^2)}, \\ I_{53} = \dfrac{4}{r^5} \dfrac{ 2 (k^8 + k^6 - 33 k^4 + 64 k^2 - 32) J_{30} - (96 k^2 - 30 k^4 - k^6 - 64) J_{10}}{k^6 (1 - k^2)}, \end{array}\right\} \end{equation}
    where
    (D14a,b)\begin{equation} J_{10} = \frac{r}{4} I_{10} = F,\quad J_{30} = \frac{r^3}{4} I_{30} = \frac{E}{1 - k^2}, \end{equation}
    with the limits at the poles
    (D15a–d)\begin{equation} I_{50} \approx \frac{2 {\rm \pi}}{r^5}, \quad I_{51} \approx 0,\quad I_{52} \approx \frac{\rm \pi}{r^5} \quad \text{and} \quad I_{53} \approx 0. \end{equation}
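The closed forms above can be cross-checked against direct quadrature of the definition (D2). The sketch below (for arbitrarily chosen sample points $\boldsymbol{x}_0$ and $\boldsymbol{x}$, and noting that in the closed forms $r^2 = (x - x_0)^2 + (\rho + \rho_0)^2$) verifies $I_{10}$ with scipy; scipy's `ellipk` takes the parameter $m = k^2$ and, by symmetry of the integrand over $[0, {\rm \pi}/2]$, agrees with the $\cos^2 \phi$ convention of (D4):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk

# target (x0, rho0) and source (x, rho) in the axial plane (sample values)
x0, rho0 = 0.0, 1.0
x, rho = 0.5, 1.5

# I_10 by direct azimuthal quadrature of its definition (D2), with the
# three-dimensional distance r(phi) between the two points
def integrand(phi):
    r2 = (x - x0)**2 + rho**2 + rho0**2 - 2.0 * rho * rho0 * np.cos(phi)
    return 1.0 / np.sqrt(r2)

direct, _ = quad(integrand, 0.0, 2.0 * np.pi)

# closed form (D11): I_10 = 4 F(k) / r, r^2 = (x - x0)^2 + (rho + rho0)^2
r = np.hypot(x - x0, rho + rho0)
k2 = 4.0 * rho * rho0 / r**2             # modulus squared, (D3)
closed = 4.0 * ellipk(k2) / r

assert abs(direct - closed) < 1e-8
```

The higher-order integrals $I_{3n}$ and $I_{5n}$ can be validated in exactly the same way.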

Appendix E. Stokes solver validation

We briefly validate our Stokes solver, described in § 4.2.

E.1. Test 1: Stokeslet nullspace

As a first test, we check the nullspace of the Stokeslet in the velocity layer-potential (4.1). The equation

(E1)\begin{equation} \mathcal{G}_i[\boldsymbol{n}](\boldsymbol{x}) = 0, \end{equation}

holds on any closed surface $\varSigma$ and follows from integrating the continuity equation over the enclosed volume. For testing purposes, we choose the geometry (5.10), with $a = 3$, $b = 1$ and $k = 4$. Following the notation from § 4, we use $N = 4$ collocation points in each element, $P = 8$ quadrature points and $P_s = 16$ Alpert quadrature nodes of order four in elements containing the singularity, as in table 1.

The geometry and pointwise errors are shown in figure 11, and the orders of convergence on increasingly refined grids are given in table 4. We obtain the expected order of convergence on a non-trivial geometry. The results are comparable to those of Veerapaneni et al. (2009), where a more accurate spectral representation was used for the density.

Figure 11. Test 1: (a) test geometry and (b) $\log _{10}$ pointwise errors in the $x$ component of the velocity layer potential.

Table 4. Test 1: relative $L^2$ errors in $\mathcal {G}_i[\boldsymbol {n}]$ for multiple element counts $M$.
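A two-dimensional analogue of this nullspace test is easy to reproduce. The sketch below uses the free-space planar Stokeslet (up to constant prefactors) rather than the axisymmetric kernels of appendix D, and the trapezoidal rule in place of the paper's discretization; since the Stokeslet is divergence-free, the integral vanishes identically:

```python
import numpy as np

def stokeslet_2d(x0, y):
    # free-space two-dimensional Stokeslet (up to a constant prefactor):
    # G_ij = -delta_ij log r + r_i r_j / r^2, with r = y - x0
    r = y - x0
    r2 = np.dot(r, r)
    return -0.5 * np.log(r2) * np.eye(2) + np.outer(r, r) / r2

# closed curve: a circle of radius 2, discretized with the trapezoidal rule,
# which is spectrally accurate for smooth periodic integrands
M = 64
theta = 2.0 * np.pi * np.arange(M) / M
y = 2.0 * np.column_stack([np.cos(theta), np.sin(theta)])
n = np.column_stack([np.cos(theta), np.sin(theta)])   # outward unit normal
dl = 2.0 * (2.0 * np.pi / M)                          # arclength weight

# G_i[n](x0) = int_Sigma G_ij(x0, y) n_j(y) dl(y) should vanish identically
x0 = np.array([0.3, 0.1])
integral = sum(stokeslet_2d(x0, y[k]) @ n[k] * dl for k in range(M))
assert np.linalg.norm(integral) < 1e-10
```

The axisymmetric version of the test replaces the kernel by (D5) and the trapezoidal rule by the composite Gauss and Alpert quadratures of § 4.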

E.2. Test 2: drag on a fluid droplet

In this second example, we will test the solution to (4.5) and the surface traction. For this purpose, we will use the analytic Hadamard–Rybczynski solution for a spherical fluid–fluid droplet under the effects of constant surface tension, which can be found in Clift, Grace & Weber (2005, chapter 3). The drag on the droplet is obtained by integrating the $x$ component of the traction along the surface, i.e.

(E2)\begin{equation} C_{D, \pm} \triangleq 2 {\rm \pi}\int_{\varGamma} (\boldsymbol{e}_x \boldsymbol{\cdot} \boldsymbol{f}_\pm) \rho \, \textrm{d} s. \end{equation}

We compare the approximation of the above integral with the analytic results

(E3)\begin{equation} \left.\begin{array}{c@{}} C_{D, +} = 2 {\rm \pi}\dfrac{2 + 3 \lambda}{1 + \lambda}, \\ C_{D, -} = 0. \end{array}\right\} \end{equation}

We choose the same discretization parameters as before, with a viscosity ratio of $\lambda = 5$ and capillary number of $\textit {Ca} = 0.01$. The resulting relative errors can be observed in table 5.

Table 5. Test 2: relative errors in the drag around a spherical fluid droplet (Hadamard–Rybczynski).
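For reference, the analytic values in (E3) can be evaluated directly; the small check below also confirms the expected bubble ($\lambda \to 0$) and rigid-sphere ($\lambda \to \infty$) limits of $4{\rm \pi}$ and $6{\rm \pi}$:

```python
import numpy as np

def drag_coefficient(lam):
    # Hadamard-Rybczynski drag coefficient on the outer fluid, (E3)
    return 2.0 * np.pi * (2.0 + 3.0 * lam) / (1.0 + lam)

# the viscosity ratio lambda = 5 used in this test
assert np.isclose(drag_coefficient(5.0), 17.0 * np.pi / 3.0)

# the bubble (lambda -> 0) and rigid-sphere (lambda -> infinity) limits
assert np.isclose(drag_coefficient(0.0), 4.0 * np.pi)
assert np.isclose(drag_coefficient(1e12), 6.0 * np.pi)
```

The numerically integrated drag (E2) is compared against these values in table 5.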

E.3. Test 3: droplet deformation

Finally, we perform a test for the evolution of the droplet described in (2.8). This test is based on the work of Stone & Leal (1989), of which we recreate figure 4. The test case tracks the deformation of the droplet by looking at

(E4)\begin{equation} D \triangleq \frac{L - B}{L + B}, \end{equation}

where $L$ and $B$ are the distances from the right-most and top-most points on the droplet to the centre of mass (here the origin). Steady state is determined based on the magnitude of the normal velocity, i.e. the simulation is stopped if

(E5)\begin{equation} \|\boldsymbol{u} \boldsymbol{\cdot} \boldsymbol{n}\|_\infty \le \epsilon, \end{equation}

for $\epsilon = 10^{-4}$. The far-field conditions are given by the extensional flow (5.6a,b).
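The deformation measure (E4) is straightforward to evaluate from discrete interface points. A small check (using an analytic ellipse rather than a computed interface) recovers the exact value $(a - b)/(a + b)$:

```python
import numpy as np

def deformation(points):
    # Taylor deformation parameter (E4): L and B are the distances from the
    # right-most and top-most interface points to the centre (the origin)
    L = np.max(points[:, 0])
    B = np.max(points[:, 1])
    return (L - B) / (L + B)

# for an ellipse with semi-axes a >= b the exact value is (a - b)/(a + b)
a, b = 1.5, 1.0
s = np.linspace(0.0, np.pi, 401)         # 401 points so that s = pi/2 is hit
ellipse = np.column_stack([a * np.cos(s), b * np.sin(s)])
assert np.isclose(deformation(ellipse), (a - b) / (a + b))
```

In the simulation, $D$ is evaluated in this way at every time step until the steady-state criterion (E5) is met.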

The methods used in our implementation differ significantly from the ones presented in Stone & Leal (1989). We use a third-order Runge–Kutta time integrator and the tangential corrections proposed in Loewenberg & Hinch (1996) to cluster points in regions of higher curvature. The time step is computed at each iteration according to (4.15) with a Courant number of $\theta = 0.1$. We use a coarse grid with only $M = 16$ elements and the same number of collocation and quadrature points per element as reported in table 1.

The droplet deformation is shown in figure 12. We can see that our results match those obtained by Stone & Leal (1989) and also agree very well with the asymptotic expansions for small $\textit {Ca}$. Finally, in figure 13, we report the relative volume loss

(E6)\begin{equation} {\rm \Delta} V \triangleq \frac{|V(t) - V(0)|}{V(0)}. \end{equation}

Figure 12. Test 3: (a) deformed droplet versus $\textit {Ca}$ and (b) the droplet dilatation for $\lambda = 1$.

Figure 13. Test 3: relative change in volume of the droplet up to steady state, with increasing loss for increasing $\textit {Ca}$.

In Stone & Leal (1989), the authors report a loss of $1\,\%$ to $10\,\%$ of the volume of the droplet over the span of $600\text {--}3000$ time steps. In our implementation, we observe a relative volume loss of $\lesssim 10^{-5}$ after $200\text {--}3000$ iterations.
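For completeness, the enclosed volume entering (E6) can be computed from the generating curve as $V = {\rm \pi} \int \rho^2 \, {\rm d}x$. A toy example on the unit sphere (using a plain trapezoidal rule, not the solver's quadrature) illustrates the bookkeeping:

```python
import numpy as np

def volume(x, rho):
    # enclosed volume of an axisymmetric interface, V = pi * int rho^2 dx,
    # by the trapezoidal rule on a (possibly non-uniform) grid in x
    r2 = rho**2
    return np.pi * np.sum(0.5 * (r2[1:] + r2[:-1]) * np.diff(x))

# unit sphere, generated by x = -cos s, rho = sin s with s in [0, pi]
s = np.linspace(0.0, np.pi, 200)
V0 = volume(-np.cos(s), np.sin(s))
assert abs(V0 - 4.0 * np.pi / 3.0) < 1e-3

# relative volume change (E6) for a radially perturbed interface
V = volume(-np.cos(s), 1.001 * np.sin(s))
assert abs(V - V0) / V0 < 3.0e-3
```

Tracking this quantity at every time step gives the volume-loss histories reported in figure 13.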

References

Aliabadi, M.H. 2002 The Boundary Element Method, Applications in Solids and Structures. John Wiley and Sons.
Allaire, G., Jouve, F. & van Goethem, N. 2011 Damage evolution in brittle materials by shape and topological sensitivity analysis. J. Comput. Phys. 230, 5010–5044.
Allaire, G. & Schoenauer, M. 2007 Conception Optimale de Structures. Springer.
Alouges, F., Desimone, A. & Heltai, L. 2011 Numerical strategies for stroke optimization of axisymmetric microswimmers. Math. Models Meth. Appl. Sci. 21, 361–387.
Alpert, B.K. 1999 Hybrid Gauss-trapezoidal quadrature rules. SIAM J. Sci. Comput. 20 (5), 1551–1584.
Bernauer, M.K. & Herzog, R. 2011 Optimal control of the classical two-phase Stefan problem in level set formulation. SIAM J. Sci. Comput. 33 (1), 342–363.
Bewley, T.R., Moin, P. & Temam, R. 2001 DNS-based predictive control of turbulence: an optimal benchmark for feedback algorithms. J. Fluid Mech. 447, 179–225.
Bonnet, M., Liu, R. & Veerapaneni, S. 2020 Shape optimization of Stokesian peristaltic pumps using boundary integral methods. Adv. Comput. Math. 46, 1–24.
Carley, M. 2006 Numerical quadratures for singular and hypersingular integrals in boundary element methods. SIAM J. Sci. Comput. 29 (3), 1207–1216.
Castro, C., Lozano, C., Palacios, F. & Zuazua, E. 2007 Systematic continuous adjoint approach to viscous aerodynamic design on unstructured grids. AIAA J. 45 (9), 2125–2139.
Clift, R., Grace, J.R. & Weber, M.E. 2005 Bubbles, Drops, and Particles. Courier Corporation.
Delfour, M.C. & Zolésio, J.-P. 2001 Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization. SIAM.
Deng, Y., Liu, Z. & Wu, Y. 2017 Topology optimization of capillary, two-phase flow problems. Commun. Comput. Phys. 22 (5), 1413–1438.
Denisova, I.V. & Solonnikov, V.A. 1991 Hölder spaces solvability of a model initial boundary value problem generated by a problem on a motion of two fluids. Zapiski Nauchnykh Seminarov POMI 188, 5–44.
Denner, F. & van Wachem, B.G.M. 2015 Numerical time-step restrictions as a result of capillary waves. J. Comput. Phys. 245, 24–40.
Desjardins, O., Moureau, V. & Pitsch, H. 2008 An accurate conservative level set/ghost fluid method for simulating turbulent atomization. J. Comput. Phys. 227 (18), 8395–8416.
Dziuk, G. & Elliott, C.M. 2007 Finite elements on evolving surfaces. IMA J. Numer. Anal. 27, 262–292.
Eilers, P.H.C. 2003 A perfect smoother. Anal. Chem. 75 (14), 3631–3636.
Feppon, F., Allaire, G., Bordeu, F., Cortial, J. & Dapogny, C. 2019 Shape optimization of a coupled thermal fluid-structure problem in a level set mesh evolution framework. SeMA J. 76 (3), 413–458.
Fikl, A. 2020 Two-phase Adjoint Stokes Problems for Matlab. Available at: https://gitlab.com/alexfikl/stokes-phase.
Garcke, H., Hinze, M., Kahle, C. & Lam, K.F. 2018 A phase field approach to shape optimization in Navier–Stokes flow with integral state constraints. Adv. Comput. Math. 44 (5), 1345–1383.
Giacomini, M., Pantz, O. & Trabelsi, K. 2017 Volumetric expressions of the shape gradient of the compliance in structural shape optimization. arXiv:1701.05762.
Hager, W.W. & Zhang, H. 2006 A survey of nonlinear conjugate gradient methods. Pac. J. Optim. 2 (1), 35–58.
Hou, T.Y., Lowengrub, J.S. & Shelley, M.J. 1994 Removing the stiffness from interfacial flows with surface tension. J. Comput. Phys. 114 (2), 312–338.
Hua, J., Kumar, K., Khoo, B.C. & Xue, H. 2002 A numerical study of the interaction of water spray with a fire plume. Fire Safety J. 37 (7), 631–657.
Jameson, A. 1988 Aerodynamic design via control theory. J. Sci. Comput. 3 (3), 233–260.
Kolm, P. & Rokhlin, V. 2001 Numerical quadratures for singular and hypersingular integrals. Comput. Maths Applics. 41 (3), 327–352.
Kress, R. 1989 Linear Integral Equations. Springer.
Kunisch, K. & Lu, X. 2011 Optimal control for multiphase fluid Stokes problems. Nonlinear Anal. Theory Meth. Applics. 74 (2), 585–599.
Lac, E. & Homsy, G.M. 2007 Axisymmetric deformation and stability of a viscous drop in a steady electric field. J. Fluid Mech. 590, 239–264.
Ladyzhenskaya, O.A. 1963 The Mathematical Theory of Viscous Incompressible Flow. Gordon and Breach.
Loewenberg, M. & Hinch, E.J. 1996 Numerical simulation of a concentrated emulsion in shear flow. J. Fluid Mech. 321, 395–419.
Luenberger, D.G. 1997 Optimization by Vector Space Methods. John Wiley & Sons.
Michor, P.W. & Mumford, D.B. 2006 Riemannian geometries on spaces of plane curves. J. Eur. Math. Soc. 8 (1), 1–48.
Monegato, G. 1994 Numerical evaluation of hypersingular integrals. J. Comput. Appl. Maths 50 (1), 9–31.
Moubachir, M. & Zolésio, J.-P. 2006 Moving Shape Analysis and Control: Applications to Fluid Structure Interactions. Chapman and Hall.
Muradoglu, M. & Tryggvason, G. 2008 A front-tracking method for computation of interfacial flows with soluble surfactants. J. Comput. Phys. 227, 2238–2262.
Ojala, R. & Tornberg, A.T. 2015 An accurate integral equation method for simulating multi-phase Stokes flows. J. Comput. Phys. 298, 145–160.
Palacios, F., Alonso, J.J. & Jameson, A. 2012 Shape sensitivity of free-surface interfaces using a level set methodology. In 42nd AIAA Fluid Dynamics Conference and Exhibit.
Pantz, O. 2005 Sensibilité de l'équation de la chaleur aux sauts de conductivité. C. R. Math. 341 (5), 333–337.
Pau, G.S.H., Almgren, A.S., Bell, J.B. & Lijewski, M.J. 2009 A parallel second-order adaptive mesh algorithm for incompressible flow in porous media. Phil. Trans. R. Soc. 367 (1907), 4633–4654.
Pironneau, O. 1974 On optimum design in fluid mechanics. J. Fluid Mech. 64 (1), 97–110.
Popinet, S. 2009 An accurate adaptive solver for surface-tension-driven interfacial flows. J. Comput. Phys. 228 (16), 5838–5866.
Pozrikidis, C. 1990 The instability of a moving viscous drop. J. Fluid Mech. 210, 1–21.
Pozrikidis, C. 1992 Boundary Integral and Singularity Methods for Linearized Viscous Flow. Cambridge University Press.
Pozrikidis, C. 2011 Introduction to Theoretical and Computational Fluid Dynamics. Oxford University Press.
Prosperetti, A. & Tryggvason, G. 2009 Computational Methods for Multiphase Flow. Cambridge University Press.
Pruss, J. & Simonett, G. 2016 Moving Interfaces and Quasilinear Parabolic Evolution Equations, vol. 105. Springer.
Repke, S., Marheineke, N. & Pinnau, R. 2011 Two adjoint-based optimization approaches for a free surface Stokes flow. SIAM J. Appl. Maths 71 (6), 2168–2184.
Ring, W. & Wirth, B. 2012 Optimization methods on Riemannian manifolds and their application to shape space. SIAM J. Optim. 22 (2), 596–627.
Schulz, V.H. 2014 A Riemannian view on shape optimization. Found. Comput. Math. 14 (3), 483–501.
Sellier, M. 2016 Inverse problems in free surface flows: a review. Acta Mech. 227 (3), 913–935.
Stone, H.A. 1994 Dynamics of drop deformation and breakup in viscous fluids. Annu. Rev. Fluid Mech. 26 (1), 65–102.
Stone, H.A. & Leal, L.G. 1989 Relaxation and breakup of an initially extended drop in an otherwise quiescent fluid. J. Fluid Mech. 198, 399–427.
Taylor, G.I. 1934 The formation of emulsions in definable fields of flow. Proc. R. Soc. Lond. 146 (858), 501–523.
Temam, R. 2001 Navier–Stokes Equations: Theory and Numerical Analysis. American Mathematical Society.
Veerapaneni, S.K., Gueyffier, D., Biros, G. & Zorin, D. 2009 A numerical method for simulating the dynamics of 3D axisymmetric vesicles suspended in viscous flows. J. Comput. Phys. 228 (19), 7233–7249.
Walker, S.W. 2015 The Shape of Things: A Practical Guide to Differential Geometry and the Shape Derivative. SIAM.
Wei, M. & Freund, J.B. 2006 A noise-controlled free shear flow. J. Fluid Mech. 546, 123–152.
Figure 1. Droplet in axisymmetric coordinates $(x, \rho , \phi )$.

Figure 2. (a) Collocation points (black circles) on two adjacent elements and (b) element subdivisions and quadrature points (grey circles) for a singularity at point $\boldsymbol {x}_i$.

Algorithm 1. Shape optimization algorithm.

Table 1. Discretization parameters used in tests 1–6.

Table 2. Test 1: relative errors in finite difference versus adjoint gradients for a finite difference step size $\epsilon = 10^{-6}$.

Figure 3. Test 1: (a) $\log _{10}$ relative errors for finite difference versus adjoint gradient with $M = \{8, 16, 24, 32\}$ elements, where the full line is $O(\epsilon )$; (b) pointwise errors for the finest case with $M = 32$ elements for full range of $\epsilon$.

Figure 4. Test 2: (a) relative errors in finite difference versus adjoint gradient for a fixed panel count $M = 32$, where the full line is $O(\epsilon )$ convergence; (b) relative errors for a fixed finite difference step size $\epsilon = 10^{-5}$ and $M \in \{8, 12, 16, 24, 32\}$, where the full line is $O(h_{{max}}^3)$ convergence.

Table 3. Test 2: relative errors in finite difference versus adjoint gradients. Case 1 is $\textit {Ca} = 0.05$ and $\textit {Ca}_d = 0.1$ (see figure 4b); case 2 is $\textit {Ca} = 0.1$ and $\textit {Ca}_d = 0.05$.

Figure 5. Test 3: (a) geometry and (b) normal velocity. Initial conditions (dashed), desired solution (dotted) and optimal solution ($\bullet$).

Figure 6. Test 3: (a) cost functional and (b) descent directions. Absolute values (full) normalized by first iteration (dashed) and Fletcher–Reeves descent direction (dotted).

Figure 7. Test 4: schematic of Taylor's ‘four roller’ apparatus (not to scale).

Figure 8. Test 4: (a) cost functional and (b) gradient. Normalized by first iteration (dashed) and absolute values (full).

Figure 9. Test 5: (a) cost functional and (b) surface tension along iterations of the optimization algorithm.

Figure 10. Test 5: geometry at the final time $T$ along iterations of the optimization algorithm.

Figure 11. Test 1: (a) test geometry and (b) $\log _{10}$ pointwise errors in the $x$ component of the velocity layer potential.

Table 4. Test 1: relative $L^2$ errors in $\mathcal {G}_i[\boldsymbol {n}]$ for multiple element counts $M$.

Table 5. Test 2: relative errors in the drag around a spherical fluid droplet (Hadamard–Rybczynski).

Figure 12. Test 3: (a) deformed droplet versus $\textit {Ca}$ and (b) the droplet dilatation for $\lambda = 1$.