
A real-time digital twin of azimuthal thermoacoustic instabilities

Published online by Cambridge University Press:  19 December 2024

Andrea Nóvoa
Affiliation:
University of Cambridge Engineering Department, Cambridge CB2 1PZ, UK Aeronautics Department, Imperial College London, London SW7 2AZ, UK
Nicolas Noiray
Affiliation:
Mechanical & Process Engineering Department, ETH Zürich, Zürich 8092, Switzerland
James R. Dawson
Affiliation:
Institute for Energy and Process Engineering, Norwegian University of Science and Technology (NTNU). Strømningsteknisk Kolbjørn Hejesvei 2, Trondheim 7491, Norway
Luca Magri*
Affiliation:
University of Cambridge Engineering Department, Cambridge CB2 1PZ, UK Aeronautics Department, Imperial College London, London SW7 2AZ, UK The Alan Turing Institute, 96 Euston Rd., London NW1 2DB, UK Dipartimento di Ingegneria Meccanica e Aerospaziale, Politecnico di Torino, Corso Duca degli Abruzzi, 24, Torino 10129, Italy
*
Email address for correspondence: l.magri@imperial.ac.uk

Abstract

When they occur, azimuthal thermoacoustic oscillations can detrimentally affect the safe operation of gas turbines and aeroengines. We develop a real-time digital twin of azimuthal thermoacoustics of a hydrogen-based annular combustor. The digital twin seamlessly combines two sources of information about the system: (i) a physics-based low-order model; and (ii) raw and sparse experimental data from microphones, which contain both aleatoric noise and turbulent fluctuations. First, we derive a low-order thermoacoustic model for azimuthal instabilities, which is deterministic. Second, we propose a real-time data assimilation framework to infer the acoustic pressure, the physical parameters, and the model bias and measurement shift simultaneously. This is the bias-regularized ensemble Kalman filter, for which we find an analytical solution that solves the optimization problem. Third, we propose a reservoir computer, which infers both the model bias and measurement shift to close the assimilation equations. Fourth, we propose a real-time digital twin of the azimuthal thermoacoustic dynamics of a laboratory hydrogen-based annular combustor for a variety of equivalence ratios. We find that the real-time digital twin (i) autonomously predicts azimuthal dynamics, in contrast to bias-unregularized methods; (ii) uncovers the physical acoustic pressure from the raw data, i.e. it acts as a physics-based filter; (iii) is a time-varying parameter system, which generalizes existing models that have constant parameters, and capture only slow-varying variables. The digital twin generalizes to all equivalence ratios, which bridges the gap of existing models. This work opens new opportunities for real-time digital twinning of multi-physics problems.

Type
JFM Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

1. Introduction

Thermoacoustic instabilities are a multi-physics phenomenon, which is caused by the constructive coupling between hydrodynamics, unsteady heat released by flames and acoustics (e.g. Culick Reference Culick1988; Paschereit, Gutmark & Weisenstein Reference Paschereit, Gutmark and Weisenstein1998; Lieuwen et al. Reference Lieuwen, Torres, Johnson and Zinn2001; Candel et al. Reference Candel, Durox, Ducruix, Birbaud, Noiray and Schuller2009; Poinsot Reference Poinsot2017; Juniper & Sujith Reference Juniper and Sujith2018; Magri, Schmid & Moeck Reference Magri, Schmid and Moeck2023; Silva Reference Silva2023). A thermoacoustic instability can arise when the heat released by the flames is sufficiently in phase with the acoustic pressure (Rayleigh Reference Rayleigh1878; Magri, Juniper & Moeck Reference Magri, Juniper and Moeck2020). If uncontrolled or not prevented, instabilities grow into large-amplitude pressure oscillations, which can detrimentally affect gas turbine operating regimes, cause structural damage and fatigue and, in the worst-case scenario, shake the engine and its components apart (e.g. Candel Reference Candel2002; Dowling & Morgans Reference Dowling and Morgans2005; Culick Reference Culick2006; Lieuwen Reference Lieuwen2012). In aeroengines the flame holders are arranged in annular configurations to increase the power density (e.g. Krebs et al. Reference Krebs, Flohr, Prade and Hoffmann2002). Despite these annular configurations being nominally rotationally symmetric, large-amplitude azimuthal thermoacoustic oscillations can spontaneously occur and break dynamical symmetry (e.g. Morgans & Stow Reference Morgans and Stow2007; Noiray, Bothien & Schuermans Reference Noiray, Bothien and Schuermans2011; Noiray & Schuermans Reference Noiray and Schuermans2013b; Bauerheim, Nicoud & Poinsot Reference Bauerheim, Nicoud and Poinsot2016; Humbert et al. Reference Humbert, Moeck, Paschereit and Orchini2023a; Magri et al. Reference Magri, Schmid and Moeck2023). Thermoacoustic instabilities in annular combustors have intricate dynamics, which can be grouped into (e.g. Magri et al. Reference Magri, Schmid and Moeck2023) (a) spinning, if the nodal lines rotate azimuthally (typical of rotationally symmetric configurations); (b) standing, if the nodal lines have, on average, a fixed orientation (typical of rotationally asymmetric configurations); and (c) mixed, if the nodal lines switch between the two former states (typical of weakly asymmetric configurations) (e.g. Schuermans, Paschereit & Monkewitz Reference Schuermans, Paschereit and Monkewitz2006; Noiray et al. Reference Noiray, Bothien and Schuermans2011; Worth & Dawson Reference Worth and Dawson2013a). Azimuthal instabilities have been investigated by high-fidelity simulations (e.g. Wolf et al. Reference Wolf, Staffelbach, Gicquel, Müller and Poinsot2012), by experimental campaigns in atmospheric and pressurized rigs (e.g. Bourgouin et al. Reference Bourgouin, Durox, Moeck, Schuller and Candel2013; Worth & Dawson Reference Worth and Dawson2013b; Mazur et al. Reference Mazur, Nygård, Dawson and Worth2019, Reference Mazur, Kwah, Indlekofer, Dawson and Worth2021; Ahn et al. Reference Ahn, Indlekofer, Dawson and Worth2022; Indlekofer et al. Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022) and heavy-duty gas turbines (Noiray & Schuermans Reference Noiray and Schuermans2013b), and by theoretical studies (e.g. Moeck, Paul & Paschereit Reference Moeck, Paul and Paschereit2010; Noiray et al. 
Reference Noiray, Bothien and Schuermans2011; Ghirardo & Juniper Reference Ghirardo and Juniper2013; Duran & Morgans Reference Duran and Morgans2015; Bauerheim et al. Reference Bauerheim, Nicoud and Poinsot2016; Laera et al. Reference Laera, Schuller, Prieur, Durox, Camporeale and Candel2017; Mensah et al. Reference Mensah, Magri, Orchini and Moeck2019; Murthy et al. Reference Murthy, Sayadi, Le Chenadec, Schmid and Bodony2019; Faure-Beaulieu & Noiray Reference Faure-Beaulieu and Noiray2020). Because the dynamics of azimuthal thermoacoustics are not yet fully understood (Aguilar Pérez et al. Reference Aguilar Pérez, Dawson, Schuller, Durox, Prieur and Candel2021; Faure-Beaulieu et al. Reference Faure-Beaulieu, Indlekofer, Dawson and Noiray2021b), the understanding, modelling and control of azimuthal oscillations is an active area of research.

Experimental campaigns were performed to gain insight into the physical mechanisms and behaviour of azimuthal instabilities in a prototypical annular combustor with electrically heated gauzes (Moeck et al. Reference Moeck, Paul and Paschereit2010). In a model annular gas turbine combustor with ethylene–air flames, Worth & Dawson (Reference Worth and Dawson2013b) investigated the flame dynamics and heat release, and how it coupled with the acoustics (O'Connor, Worth & Dawson Reference O'Connor, Worth and Dawson2013; Worth & Dawson Reference Worth and Dawson2013a). They found that varying the burner spacing, to promote or suppress flame–flame interactions, resulted in changes to the amplitude and frequency of the azimuthal modes. They also varied the burner swirl directions, which impacted the preferred mode selection. The spontaneous symmetry breaking of thermoacoustic eigenmodes was experimentally analysed in Indlekofer et al. (Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022), who found that small imperfections in the rotational symmetry of the annular combustor were magnified at the supercritical Hopf bifurcation point that separated resonant from limit cycle oscillations in the form of a standing mode oriented at an azimuthal angle defined by the rotational asymmetry. The nonlinear dynamics of beating modes were experimentally discovered and modelled by Faure-Beaulieu et al. (Reference Faure-Beaulieu, Indlekofer, Dawson and Noiray2021b). The authors found that for some combinations of small asymmetries of the resistive and reactive components of the thermoacoustic system, purely spinning limit cycles became unstable, and heteroclinic orbits between the corresponding saddle points led to beating oscillations with periodic changes in the spin direction. The previous works focused on statistically stationary regimes. Slowly varying operating conditions were investigated in Indlekofer et al. (Reference Indlekofer, Faure-Beaulieu, Noiray and Dawson2021), who unravelled dynamic hystereses of the thermoacoustic state of the system. Other experimental and theoretical investigations focused on analysing the intermittent behaviour (Faure-Beaulieu et al. Reference Faure-Beaulieu, Indlekofer, Dawson and Noiray2021a; Roy et al. Reference Roy, Singh, Nair, Chaudhuri and Sujith2021). In the latter work, the solutions of the Fokker–Planck equation, which governs the probability of observing an instantaneous limit cycle state dominated by a standing or spinning mode, enabled the prediction of the first passage time statistics between erratic changes in spin direction.

The intricate linear and nonlinear dynamics of azimuthal thermoacoustic oscillations spurred interest in physics-based low-order modelling and control (e.g. Morgans & Stow Reference Morgans and Stow2007; Illingworth & Morgans Reference Illingworth and Morgans2010; Humbert et al. Reference Humbert, Orchini, Paschereit and Noiray2023b). The azimuthal dynamics can be qualitatively described by a one-dimensional wave-like equation (Noiray et al. Reference Noiray, Bothien and Schuermans2011; Ghirardo & Juniper Reference Ghirardo and Juniper2013; Bauerheim et al. Reference Bauerheim, Parmentier, Salas, Nicoud and Poinsot2014; Bothien, Noiray & Schuermans Reference Bothien, Noiray and Schuermans2015; Yang, Laera & Morgans Reference Yang, Laera and Morgans2019), which can also include a stochastic forcing term to model the effect of the turbulent fluctuations and noise (Noiray & Schuermans Reference Noiray and Schuermans2013a; Orchini et al. Reference Orchini, Magri, Silva, Mensah and Moeck2020). In the frequency domain, azimuthal oscillations were investigated with eigenvalue sensitivity (Magri, Bauerheim & Juniper Reference Magri, Bauerheim and Juniper2016a; Magri et al. Reference Magri, Bauerheim, Nicoud and Juniper2016b; Mensah et al. Reference Mensah, Magri, Orchini and Moeck2019; Orchini et al. Reference Orchini, Magri, Silva, Mensah and Moeck2020). These works concluded that traditional eigenvalue sensitivity analysis needed to be extended to tackle degenerate pairs of azimuthal modes, as reviewed in Magri et al. (Reference Magri, Schmid and Moeck2023). A review of azimuthal thermoacoustic modelling can be found in Bauerheim et al. (Reference Bauerheim, Parmentier, Salas, Nicoud and Poinsot2016). In the time domain, the typical approach is to develop models that describe the acoustic state, which is also referred to as the slow-varying variable approach based on quaternions (e.g. Ghirardo & Bothien Reference Ghirardo and Bothien2018) or as the generalized Bloch sphere representation (Magri et al. Reference Magri, Schmid and Moeck2023). In this approach, all fast-varying dynamics, such as aleatoric noise and turbulent fluctuations, are filtered out of the data and modelled as a stochastic forcing term. The equations model the amplitude of the acoustic pressure envelope, the temporal phase drift and the angles defining rotational and reflectional symmetry breaking (Ghirardo & Bothien Reference Ghirardo and Bothien2018). Faure-Beaulieu & Noiray (Reference Faure-Beaulieu and Noiray2020) introduced the slow-varying variables into a stochastic wave equation, which was averaged in space and time to obtain coupled Langevin equations. The low-order model parameters were estimated via Langevin regression, which is a regression method used in turbulent environments (e.g. Siegert, Friedrich & Peinke Reference Siegert, Friedrich and Peinke1998; Noiray Reference Noiray2017; Boujo et al. Reference Boujo, Bourquard, Xiong and Noiray2020; Callaham et al. Reference Callaham, Loiseau, Rigas and Brunton2021). Slow-varying variable models provide qualitatively accurate representations of azimuthal oscillations. However, they are not suitable for real-time applications, in which we need models that can receive data from sensors as raw inputs. Therefore, this work focuses on models describing the fast-varying quantities, which model the evolution of the modal amplitudes through coupled Van der Pol oscillators (e.g. Noiray et al. Reference Noiray, Bothien and Schuermans2011).
Nonetheless, the discussed low-order modelling methods are offline, i.e. they infer the model parameters from the data in a post-processing stage and they identify one parameter at a time (i.e. they do not estimate all the parameters in one computation). This means that the physical parameters of the literature are not necessarily optimal.

On the one hand, physics-based low-order models are qualitatively accurate, but they are quantitatively inaccurate due to modelling assumptions and approximations (Magri & Doan Reference Magri and Doan2020). The low-order models’ state and parameters are affected by both aleatoric uncertainties and epistemic errors, i.e. model biases (Nóvoa, Racca & Magri Reference Nóvoa, Racca and Magri2024). On the other hand, experimental data can provide reliable information about the system, but they are typically noisy and sparse (e.g. Magri & Doan Reference Magri and Doan2020). Experimental measurements can be affected by aleatoric uncertainties due to environmental and instrumental noise, and systematic errors in the sensors. To rigorously combine the two sources of information (low-order modelling and experimental data) to improve the knowledge of the system, data assimilation (DA) comes into play (e.g. Tarantola Reference Tarantola2005; Evensen Reference Evensen2009). Data assimilation for thermoacoustics was introduced with a variational (offline) approach by Traverso & Magri (Reference Traverso and Magri2019) and with a real-time (or sequential) approach by Nóvoa & Magri (Reference Nóvoa and Magri2022), who deployed an ensemble square-root Kalman filter to infer the pressure and physical parameters. Sequential DA has also been successfully applied to reacting flows (e.g. Labahn et al. Reference Labahn, Wu, Coriton, Frank and Ihme2019; Yu et al. Reference Yu, Jaravel, Ihme, Juniper and Magri2019a; Donato, Galletti & Parente Reference Donato, Galletti and Parente2024), turbulence modelling (e.g. Colburn, Cessna & Bewley Reference Colburn, Cessna and Bewley2011; Majda & Harlim Reference Majda and Harlim2012; Gao et al. Reference Gao, Wang, Overton, Zupanski and Tu2017; Magri & Doan Reference Magri and Doan2020; Hansen, Brouzet & Ihme Reference Hansen, Brouzet and Ihme2024) and acoustics (e.g. Kolouri, Azimi-Sadjadi & Ziemann Reference Kolouri, Azimi-Sadjadi and Ziemann2013; Wang et al. Reference Wang, He, Deng and Liu2021), among others. Apart from Nóvoa & Magri (Reference Nóvoa and Magri2022), these works implemented classical DA formulations, which assume that the uncertainties are aleatoric and unbiased (e.g. Dee Reference Dee2005; Laloyaux et al. Reference Laloyaux, Bonavita, Dahoui, Farnan, Healy, Hólm and Lang2020). However, as shown in Nóvoa et al. (Reference Nóvoa, Racca and Magri2024), a DA framework provides an optimal state and set of parameters when the model biases are modelled. Model bias estimation in real-time DA is traditionally based on the separate-bias Kalman filter scheme (Friedland Reference Friedland1969). This bias-aware filter augments the dynamical system with a parameterized model for the bias, and then solves two state and parameter estimation problems: one for the physical model and another for the bias model (e.g. Ignagni Reference Ignagni1990; Dee & da Silva Reference Dee and da Silva1998; da Silva & Colonius Reference da Silva and Colonius2020). However, the separate-bias Kalman filter relies on the knowledge of the bias functional form a priori. Also, as explained in Nóvoa et al. (Reference Nóvoa, Racca and Magri2024), the filter is unregularized, which can lead to unrealistically large estimates of the model bias and does not ensure that the bias is unique. In other words, the separate-bias Kalman filter may be ill-posed. To overcome these limitations, Nóvoa et al.
(Reference Nóvoa, Racca and Magri2024) proposed a regularized bias-aware ensemble data assimilation framework, for which they derived an analytical solution: the bias-regularized ensemble Kalman filter (r-EnKF). The r-EnKF is a real-time DA method that not only accounts for the model bias, but also regularizes the norm of the bias. The regularization makes the algorithm stable and the solution of the problem unique. In the limit of a perfect model (i.e. zero model bias), the r-EnKF recovers the classical bias-unaware ensemble Kalman filter (EnKF) introduced by Evensen (Reference Evensen2003). The model bias was inferred by an echo state network (ESN), which is a universal approximator of time-varying functions (e.g. Grigoryeva & Ortega Reference Grigoryeva and Ortega2018) that can adaptively estimate the model bias with no major assumption regarding the functional form (Nóvoa et al. Reference Nóvoa, Racca and Magri2024). Echo state networks are suitable for real-time digital twins because their training consists of solving a linear system, which is computationally inexpensive in contrast to the back-propagation methods required in other machine learning methods (e.g. Bonavita & Laloyaux Reference Bonavita and Laloyaux2020; Brajard et al. Reference Brajard, Carrassi, Bocquet and Bertino2020). The r-EnKF of Nóvoa et al. (Reference Nóvoa, Racca and Magri2024) assumed that the measurements were unbiased, which is an assumption that we relax in this paper to assimilate real experimental data.

1.1. Objectives and structure

The objective of this work is to develop a real-time digital twin of azimuthal thermoacoustic instabilities. Figure 3 pictorially describes the proposed digital twin framework. Real-time digital twins are adaptive models, which are designed to predict the behaviour of their physical counterpart by assimilating data from sensors when they become available. We need four ingredients to design a real-time digital twin: (i) data from sensors (noisy and sparse microphones); (ii) a qualitative low-order model, which should capture the fast-varying acoustic dynamics rather than the slow-varying states; (iii) an estimator of both the model bias and the measurement shift; and (iv) a statistical DA method that optimally combines the data and the model on the fly (in real time) to infer the physical states and parameters. Section 2 describes the available experimental data, as well as the physics-based low-order model used to describe the thermoacoustic problem. The real-time assimilation framework is detailed in § 3. The proposed ESN is described in § 4. Section 5 summarizes the implementation of the digital twin. Section 6 shows the results of the real-time digital twin, and compares the performance of the bias-unregularized EnKF with the r-EnKF. Section 7 concludes the paper.

2. Azimuthal thermoacoustic instabilities

We model in real time azimuthal thermoacoustic instabilities in hydrogen-based annular combustors. In this section we describe both the experimental data (§ 2.1) and the physics-based low-order model (§ 2.2).

2.1. Experimental set-up and data

We use the experimental data of Faure-Beaulieu et al. (Reference Faure-Beaulieu, Indlekofer, Dawson and Noiray2021a) and Indlekofer et al. (Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022). The experimental set-up (figure 1) consists of an annular combustor with 12 equally spaced burners with premixed flames, which are fuelled with a mixture of 70/30 % ${\rm H}_2/{\rm CH}_4$ by power. The operating conditions are atmospheric and the thermal power is fixed at 72 kW. The equivalence ratios considered are $\varPhi =\{0.4875, 0.5125, 0.5375, 0.5625\}$. When $\varPhi >0.5$, the system is thermoacoustically unstable with self-sustained oscillations that peak at approximately 1090 Hz (Indlekofer et al. Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022). The acoustic pressure data were recorded at a sampling rate of 51.2 kHz by Kulite pressure transducers (XCS-093-05D) at four azimuthal locations $\theta =\{0^\circ, 60^\circ, 120^\circ, 240^\circ \}$.

Figure 1. Side and top views of the experimental set-up of the annular combustor. Adapted from Faure-Beaulieu et al. (Reference Faure-Beaulieu, Indlekofer, Dawson and Noiray2021b).

Figure 2 shows the time series, histograms and power spectral density (PSD) of the raw and post-processed experimental data at $\varPhi =0.5125$. The PSD of the post-processed data is obtained after applying a Butterworth band-pass filter to the raw data around the frequency of instability, which approximately isolates the frequencies between 1050 and 1150 Hz (figure 2 III). The pressure measurements have a mean value that is different from zero because of large-scale flow structures, which are not correlated to the thermoacoustic dynamics. The non-zero mean pressure is referred to as ‘measurement shift’, as further explained in § 3.1. In a digital twin that is coupled with sensors’ measurements in real time, the data are the raw pressure signals that are assimilated into the low-order model (figure 2a). Because the raw pressure contains aleatoric noise, measurement shift and turbulent flow fluctuations, we refer to the post-processed data (figure 2b) as the ‘presumed ground truth’ or ‘presumed acoustic state’, that is, the physical acoustic state that is the presumed target prediction.
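For concreteness, a minimal sketch of this post-processing step is given below, assuming a standard zero-phase Butterworth band-pass filter; the filter order and the forward–backward (zero-phase) filtering are assumptions, since the text only specifies the 1050–1150 Hz band and the 51.2 kHz sampling rate.

```python
import numpy as np
from scipy import signal

fs = 51_200.0                    # sampling rate of the microphones (Hz)
band = (1050.0, 1150.0)          # band around the ~1090 Hz instability peak

# Second-order sections for numerical stability; order 4 is an assumption.
sos = signal.butter(4, band, btype="bandpass", fs=fs, output="sos")

def presumed_truth(p_raw: np.ndarray) -> np.ndarray:
    """Band-pass the raw pressure signal to estimate the presumed acoustic state."""
    return signal.sosfiltfilt(sos, p_raw, axis=0)   # zero-phase filtering
```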

Figure 2. Pressure at the four measurement locations with equivalence ratio $\varPhi =0.5125$ (thermoacoustically unstable configuration). (a) Raw experimental data, i.e. the observables, and (b) post-processed data, which are briefly referred to as the presumed truth. (I) Time series of the fast-varying oscillations, (II) histogram of the acoustic pressure in the time window for DA, $t\in [0.5, 0.85]$, and (III) comparison of the power spectral density of the raw data (light colour) and the presumed truth (dark colour). The horizontal lines in (II) indicate the mean of the histograms. The thermoacoustic limit cycle is a standing acoustic mode with a nodal line near $\theta =120^\circ$ (mono-modal histogram).

Figure 3. Schematic of the proposed digital twin framework. (a) The physical and digital systems evolve in time (left to right) independently. The digital system is composed of a physical model with uncertain state and parameters, and an estimator of the model bias and innovations (from which the measurement shift is estimated). When sensor data (crosses) become available, they are combined with the estimates from the digital twin using the r-EnKF to update the digital system. (b) Diagram of the r-EnKF update performed sequentially every time that data become available.

2.2. A qualitative and physics-based low-order model

The dynamics of the azimuthal acoustics can be qualitatively modelled by a one-dimensional wave-like equation (e.g. Faure-Beaulieu & Noiray Reference Faure-Beaulieu and Noiray2020; Indlekofer et al. Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022)

(2.1)\begin{gather} \dfrac{\partial^2p}{\partial t^2} + \zeta\dfrac{\partial p}{\partial t} - \left[1+\epsilon\cos{\left(2(\theta-\varTheta_\epsilon)\right)}\right]\dfrac{c^2}{r^2}\dfrac{\partial^2p}{\partial \theta^2} = (\gamma - 1) \dfrac{\partial \dot{q}}{\partial t}, \end{gather}
(2.2)\begin{gather} \quad \text{with}\ (\gamma - 1) \dfrac{\partial \dot{q}}{\partial t} = \beta\left[1+c_2\cos{(2(\theta-\varTheta_\beta))}\right]p - \kappa p^3, \end{gather}

where $p$ is the acoustic pressure; $\zeta$ is the acoustic damping; $c$ is the speed of sound; $r$ is the mean radius of the annulus; $\gamma$ is the heat capacity ratio; $\epsilon$ and $\varTheta_\epsilon$ are the amplitude and phase of the reactive symmetry, respectively (Indlekofer et al. Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022); and $\dot {q}$ is the coherent component of the fluctuations of the heat release rate (Noiray Reference Noiray2017), which is divided into a nonlinear cubic term that models the saturation of the flame response, weighted by the parameter $\kappa$, and a linear response to the acoustic perturbations, weighted by the heat release strength $\beta$, the resistive asymmetry intensity, $c_2$, and the direction of maximum root-mean-square (r.m.s.) acoustic pressure, $\varTheta_\beta$. The flame response model (2.2) is an accurate approximation in the vicinity of a Hopf bifurcation (e.g. Lieuwen Reference Lieuwen2003; Noiray Reference Noiray2017). To further reduce the complexity of (2.2), we project the acoustic pressure on the degenerate pair of eigenmodes of the homogeneous wave equation (Noiray et al. Reference Noiray, Bothien and Schuermans2011; Noiray & Schuermans Reference Noiray and Schuermans2013b; Magri et al. Reference Magri, Schmid and Moeck2023)

(2.3)\begin{equation} p(t, \theta) = \eta_a(t) \cos\theta + \eta_b(t)\sin\theta, \end{equation}

where $\eta _a$ and $\eta _b$ are the acoustic velocity amplitudes. Substituting (2.3) into (2.1) and averaging yield the governing equations, which consist of a set of nonlinearly coupled oscillators

(2.4a)\begin{align} \ddot{\eta}_a &={-}\omega^2\left[\eta_a \left(1 + \frac{\epsilon}{2}\cos{(2 \varTheta_\epsilon)}\right) + \eta_b \frac{\epsilon}{2} \sin{(2\varTheta_\epsilon)}\right] \nonumber\\ &\quad +\dot{\eta}_a\left[2\nu+\frac{c_2\beta}{2}\cos{(2\varTheta_\beta)}-\frac{3\kappa}{4}(3\eta_a^2+\eta_b^2)\right]+ \dot{\eta}_b\left[\frac{c_2\beta}{2}\sin{(2\varTheta_\beta)}- \frac{3}{2}\kappa\eta_b\eta_a\right], \end{align}
(2.4b)\begin{align} \ddot{\eta}_b &={-}\omega^2\left[\eta_b \left(1 - \frac{\epsilon}{2}\cos{(2 \varTheta_\epsilon)}\right) + \eta_a \frac{\epsilon}{2} \sin{(2\varTheta_\epsilon)}\right] \nonumber\\ &\quad +\dot{\eta}_b\left[2\nu-\frac{c_2\beta}{2}\cos{(2\varTheta_\beta)}-\frac{3\kappa}{4}(3\eta_b^2+\eta_a^2)\right]+ \dot{\eta}_a\left[\frac{c_2\beta}{2}\sin{(2\varTheta_\beta)}- \frac{3}{2}\kappa\eta_b\eta_a\right], \end{align}

where $\nu =(\beta -\zeta )/2$ is the linear growth rate of the pressure amplitude in the absence of asymmetries, i.e. when $c_2=0$ (Indlekofer et al. Reference Indlekofer, Faure-Beaulieu, Dawson and Noiray2022). Equation (2.4) can be written in compact notation with a nonlinear state-space formulation

(2.5) \begin{equation} \left.\begin{array}{c@{}} {\rm d}{\boldsymbol{\mathit{\phi}}}= \mathcal{F}\left(\boldsymbol{\mathit{\phi}}, \boldsymbol{\alpha} \right) {\rm d}t, \\ {\boldsymbol{\mathit{q}}} = \mathcal{M}({\boldsymbol{\mathit{\theta}}}, {\boldsymbol{\mathit{\phi}}}), \end{array} \right\} \end{equation}

where ${\boldsymbol {\mathit {\phi }}}=[\eta _a; \dot {\eta }_a; \eta _b; \dot {\eta }_b]\in \mathbb {R}^{N_\phi }$ is the state vector; $\mathcal {F}:\mathbb {R}^{N_\phi }\rightarrow \mathbb {R}^{N_\phi }$ is the nonlinear operator that represents (2.4) and ${\boldsymbol {\mathit {\alpha }}}=[\nu ; c_2\beta ; \omega ; \kappa ; \epsilon ; \varTheta _\beta ; \varTheta _\epsilon ]\in \mathbb {R}^{N_\alpha }$ are the system's parameters (the operator $[~;~]$ indicates vertical concatenation and we use column vectors throughout the paper). The vector ${\boldsymbol {\mathit {q}}}$ describes the model observables, which are the acoustic pressures at the azimuthal locations ${\boldsymbol {\mathit {\theta }}}= \{0^\circ, 60^\circ, 120^\circ, 240^\circ \}$. The model observables are computed through the measurement operator $\mathcal {M}:\mathbb {R}^{N_\phi }\rightarrow \mathbb {R}^{N_q}$, which maps the state variables into the observable space through (2.3).
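To make the low-order model concrete, the sketch below implements the right-hand side of (2.4) and the measurement operator (2.3) that maps the state to the four microphone locations. The parameter values are arbitrary placeholders for illustration, not the values inferred in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

theta_mic = np.deg2rad([0.0, 60.0, 120.0, 240.0])   # microphone angles

def rhs(t, phi, nu, c2beta, omega, kappa, eps, th_b, th_e):
    """Right-hand side F(phi, alpha) of (2.4), phi = [eta_a, deta_a, eta_b, deta_b]."""
    ea, dea, eb, deb = phi
    dd_ea = (-omega**2 * (ea * (1 + 0.5 * eps * np.cos(2 * th_e))
                          + eb * 0.5 * eps * np.sin(2 * th_e))
             + dea * (2 * nu + 0.5 * c2beta * np.cos(2 * th_b)
                      - 0.75 * kappa * (3 * ea**2 + eb**2))
             + deb * (0.5 * c2beta * np.sin(2 * th_b) - 1.5 * kappa * ea * eb))
    dd_eb = (-omega**2 * (eb * (1 - 0.5 * eps * np.cos(2 * th_e))
                          + ea * 0.5 * eps * np.sin(2 * th_e))
             + deb * (2 * nu - 0.5 * c2beta * np.cos(2 * th_b)
                      - 0.75 * kappa * (3 * eb**2 + ea**2))
             + dea * (0.5 * c2beta * np.sin(2 * th_b) - 1.5 * kappa * ea * eb))
    return [dea, dd_ea, deb, dd_eb]

def measure(phi):
    """Measurement operator (2.3): acoustic pressure at the microphone angles."""
    ea, _, eb, _ = phi
    return ea * np.cos(theta_mic) + eb * np.sin(theta_mic)

# Example forecast over one assimilation interval (placeholder parameters).
alpha = dict(nu=10.0, c2beta=5.0, omega=2 * np.pi * 1090.0, kappa=1e-4,
             eps=0.01, th_b=0.0, th_e=0.0)
sol = solve_ivp(rhs, (0.0, 5.86e-4), [10.0, 0.0, 5.0, 0.0],
                args=tuple(alpha.values()), max_step=1e-5)
p_mic = measure(sol.y[:, -1])                         # model observables q
```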

3. Real-time DA

Real-time DA makes qualitative models quantitatively accurate every time that sensors’ measurements (data) become available (Magri & Doan Reference Magri and Doan2020). At a measurement time $t_k$, the model's state, ${\boldsymbol{\mathit{\phi}}}_k$, and parameters, ${\boldsymbol{\mathit{\alpha}}}_k$, are inferred by combining two sources of information, that is, the measurement, ${\boldsymbol{\mathit{d}}}_k$, and the model forecast, ${\boldsymbol{\mathit{q}}}_k$. A robust inference process must also filter out both epistemic and aleatoric uncertainties (§§ 3.1, 3.2). The result of the assimilation is the analysis state (§§ 3.3, 3.4), which provides a statistically optimal estimate of the physical quantity that we wish to predict, i.e. the truth ${\boldsymbol{\mathit{d}}}^{t}_k$. As explained in § 2.1, the truth is unknown; thus, we define the post-processed acoustic pressure as the presumed ground truth (figure 2). We drop the time subscript $k$ unless it is necessary for clarity.

3.1. Aleatoric and epistemic uncertainties

First, we discuss the statistical hypotheses on the aleatoric uncertainties. The aleatoric uncertainties contaminate the state and parameters as

(3.1a,b)\begin{equation} {\boldsymbol{\mathit{\phi}}} + {\boldsymbol{\mathit{\epsilon}}}_\phi = {\boldsymbol{\mathit{\phi}}}^{t}, \quad {\boldsymbol{\mathit{\alpha}}} + {\boldsymbol{\mathit{\epsilon}}}_\alpha = {\boldsymbol{\mathit{\alpha}}}^{t}, \end{equation}

where ${t}$ indicates the true quantity (which is unknown). The aleatoric uncertainties are modelled as Gaussian distributions

(3.2a,b)\begin{equation} {\boldsymbol{\mathit{\epsilon}}}_\phi \sim \mathcal{N}({\boldsymbol{\mathit{0}}}, \boldsymbol{\mathsf{C}}_{\phi\phi} ), \quad {\boldsymbol{\mathit{\epsilon}}}_\alpha \sim \mathcal{N}({\boldsymbol{\mathit{0}}}, \boldsymbol{\mathsf{C}}_{\alpha\alpha} ), \end{equation}

where $\mathcal {N}({\boldsymbol {\mathit {0}}}, \boldsymbol{\mathsf{C}})$ is a normal distribution with zero mean and covariance $\boldsymbol{\mathsf{C}}$. Second, we discuss model biases, which are epistemic uncertainties (Nóvoa et al. Reference Nóvoa, Racca and Magri2024). The model bias may arise from modelling assumptions and approximations in the operators $\mathcal {F}$ and $\mathcal {M}$. The true bias of a model is defined as

(3.3)\begin{equation} {\boldsymbol{\mathit{b}}}^{t} = {\boldsymbol{\mathit{d}}}^{t} -\mathbb{E}({\boldsymbol{\mathit{q}}}), \end{equation}

which is the expected difference between the true observable and the model observable. (If the model is unbiased, ${\boldsymbol {\mathit {d}}}^{t} =\mathbb {E}({\boldsymbol {\mathit {q}}})$.) The true bias is unknown (colloquially, an ‘unknown unknown’ (Nóvoa et al. Reference Nóvoa, Racca and Magri2024)) because we do not have access to the ground truth. Therefore, we need to estimate it from the information available from the data. We refer to ${\boldsymbol {\mathit {b}}}^{\dagger} = {\boldsymbol {\mathit {d}}}^{{\dagger}}-\mathbb {E}({\boldsymbol {\mathit {q}}})$ as the presumed true bias, and to the bias estimated from a model as ${\boldsymbol {\mathit {b}}}$. With this, the model equations, which define the first source of information on the system, are

(3.4) \begin{equation} \left.\begin{array}{c@{}} {\rm d}{\boldsymbol{\mathit{\phi}}}= \mathcal{F}(\boldsymbol{\mathit{\phi}}+{\boldsymbol{\mathit{\epsilon}}}_\phi, \boldsymbol{\alpha}+{\boldsymbol{\mathit{\epsilon}}}_\alpha ) {\rm d}t, \\ {}{\boldsymbol{\mathit{y}}} = \mathcal{M}({\boldsymbol{\mathit{\theta}}}, {\boldsymbol{\mathit{\phi}}}) + {\boldsymbol{\mathit{b}}}+ {\boldsymbol{\mathit{\epsilon}}}_q, \end{array} \right\} \end{equation}

where ${\boldsymbol {\mathit {y}}}$ is the bias-corrected model observable and ${\boldsymbol {\mathit {\epsilon }}}_q \sim \mathcal {N}({\boldsymbol {\mathit {0}}}, \boldsymbol{\mathsf{C}}_{qq})$. The set of equations (3.4) is not closed until we define an estimator for the model bias (§ 4).

The second source of information on the system is the data measured by the sensors. Experimental data are affected by both aleatoric noise and measurement shifts (§ 2.1). The true measurement shift is

(3.5)\begin{equation} {\boldsymbol{\mathit{b}}}_d^{t} = \mathbb{E}({\boldsymbol{\mathit{d}}}) - {\boldsymbol{\mathit{d}}}^{t}. \end{equation}

Because the ground truth is not known, the best estimate of the measurement shift is the presumed true measurement shift ${\boldsymbol{\mathit{b}}}_d^{\dagger} = {\boldsymbol{\mathit{d}}}^{\dagger} -\mathbb{E}({\boldsymbol{\mathit{d}}})$, and the estimated measurement shift is ${\boldsymbol{\mathit{b}}}_d$. With this, we define the measurements at a time instant as

(3.6)\begin{equation} {\boldsymbol{\mathit{d}}} + {\boldsymbol{\mathit{b}}}_d + {\boldsymbol{\mathit{\epsilon}}}_d = {\boldsymbol{\mathit{d}}}^{t}. \end{equation}

For clarity, we include a summary of the terminology in table 1. In the problem under investigation, ${\boldsymbol {\mathit {d}}}$ is the raw acoustic data, ${\boldsymbol {\mathit {d}}}^{{\dagger} }$ is the post-processed data and ${\boldsymbol {\mathit {b}}}_d$ is the non-zero mean of the raw data (see § 2.1). The aleatoric noise affects the measurement as ${\boldsymbol {\mathit {\epsilon }}}_d \sim \mathcal {N}({\boldsymbol {\mathit {0}}}, \boldsymbol{\mathsf{C}}_{dd})$. For simplicity, we assume that the measurement errors are statistically independent, i.e. $\boldsymbol{\mathsf{C}}_{dd}$ is a diagonal matrix with identical diagonal entries $\sigma _d$.

Table 1. Summary of the terminology.

Both the model bias and measurement shift are unknown a priori. To infer them, we analyse the residuals between the forecast and the observations, also known as innovations (Dee & Todling Reference Dee and Todling2000; Haimberger Reference Haimberger2007), which are defined as

(3.7)\begin{equation} {\boldsymbol{\mathit{i}}} = {\boldsymbol{\mathit{d}}} - {\boldsymbol{\mathit{q}}} \quad\Rightarrow\quad \mathbb{E}({\boldsymbol{\mathit{i}}}) = {\boldsymbol{\mathit{b}}}_d + {\boldsymbol{\mathit{b}}}. \end{equation}

Therefore, the expected innovation is the sum of the measurement and model biases as defined in (3.3) and (3.5). The relation between the innovations ${\boldsymbol {\mathit {i}}}$ and the biases ${\boldsymbol {\mathit {b}}}$ and ${\boldsymbol {\mathit {b}}}_d$ will be essential for the design of the bias estimator in § 4.

3.2. Augmented state-space formulation

We define the augmented state vector ${\boldsymbol {\mathit {\psi }}}=[{\boldsymbol {\mathit {\phi }}}; {\boldsymbol {\mathit {\alpha }}};{\boldsymbol {\mathit {q}}}]$, which comprises the state variables, the thermoacoustic parameters and the model observables to formally have a linear measurement operator $\boldsymbol{\mathsf{M}}$, which simplifies the derivation of DA methods (e.g. Nóvoa & Magri Reference Nóvoa and Magri2022). The augmented form of the model (3.4) yields

(3.8) \begin{equation} \left\{\begin{array}{@{}rcl} {\rm d}\begin{bmatrix} {\boldsymbol{\mathit{\phi}}}\\ {\boldsymbol{\mathit{\alpha}}}\\ {\boldsymbol{\mathit{q}}} \end{bmatrix} & = & \begin{bmatrix} \mathcal{F}({\boldsymbol{\mathit{\phi}}}+{\boldsymbol{\mathit{\epsilon}}}_\phi,{\boldsymbol{\mathit{\alpha}}}+{\boldsymbol{\mathit{\epsilon}}}_\alpha)\\ {\boldsymbol{\mathit{0}}}_{N_\alpha}\\ {\boldsymbol{\mathit{0}}}_{N_q} \end{bmatrix} {\rm d}t\\ {\boldsymbol{\mathit{y}}} & = & {\boldsymbol{\mathit{q}}} + {\boldsymbol{\mathit{b}}} + {\boldsymbol{\mathit{\epsilon}}}_{q} \end{array} \right. \;\leftrightarrow \; \left\{ \begin{array}{@{}rcl} {\rm d}{\boldsymbol{\mathit{\psi}}} & = & \boldsymbol{\mathsf{F}}\left({\boldsymbol{\mathit{\psi}}} +{\boldsymbol{\mathit{\epsilon}}}_\psi\right){\rm d}t \\ {\boldsymbol{\mathit{y}}} & = & \boldsymbol{\mathsf{M}}{\boldsymbol{\mathit{\psi}}} + {\boldsymbol{\mathit{b}}} + {\boldsymbol{\mathit{\epsilon}}}_{q} \end{array} \right. , \quad\forall\,t \neq {t_d}, \end{equation}

where $t_d$ are the times when the assimilation is performed; $\boldsymbol{\mathsf{F}}({\boldsymbol{\mathit{\psi}}})$ and ${\boldsymbol{\mathit{\epsilon}}}_\psi$ are the augmented nonlinear operator and aleatoric uncertainties, respectively; $\boldsymbol{\mathsf{M}} = [\boldsymbol{\mathsf{0}}~|~\mathbb {I}_{N_q}]$ is the linear measurement operator, which consists of the horizontal concatenation of a matrix of zeros, $\boldsymbol{\mathsf{0}}\in \mathbb {R}^{N_q\times (N_\phi +N_\alpha )}$, and the identity matrix, $\mathbb {I}_{N_q}\in \mathbb {R}^{N_q\times N_q}$; and ${\boldsymbol{\mathit{0}}}_{N_\alpha }$ and ${\boldsymbol{\mathit{0}}}_{N_q}$ are vectors of zeros (because the parameters are constant in time, and ${\boldsymbol{\mathit{q}}}$ is not integrated in time but is only computed at the analysis step).
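A minimal sketch of the augmented bookkeeping is given below: the augmented state $\boldsymbol{\psi}=[\boldsymbol{\phi};\boldsymbol{\alpha};\boldsymbol{q}]$ and the linear measurement operator $\boldsymbol{\mathsf{M}}=[\boldsymbol{\mathsf{0}}\,|\,\mathbb{I}_{N_q}]$, which simply extracts the observables from $\boldsymbol{\psi}$. The dimensions follow the text ($N_\phi=4$, $N_\alpha=7$, $N_q=4$).

```python
import numpy as np

N_phi, N_alpha, N_q = 4, 7, 4     # states, parameters, observables

def augment(phi, alpha, q):
    """Augmented state psi = [phi; alpha; q] (vertical concatenation)."""
    return np.concatenate([phi, alpha, q])

# M = [0 | I]: a zero block over (phi, alpha) next to the identity over q.
M = np.hstack([np.zeros((N_q, N_phi + N_alpha)), np.eye(N_q)])

psi = augment(np.zeros(N_phi), np.zeros(N_alpha), np.arange(N_q, dtype=float))
assert np.allclose(M @ psi, psi[-N_q:])   # M psi recovers the observables q
```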

3.3. Stochastic ensemble framework

Under the Gaussian assumption, the inverse problem of finding the states, ${\boldsymbol{\mathit{\phi}}}$, and parameters, ${\boldsymbol{\mathit{\alpha}}}$, given some observations, ${\boldsymbol{\mathit{d}}}$, would be solved by the Kalman filter equations if the operator $\mathcal {F}$ were linear (Kalman Reference Kalman1960). Thermoacoustic oscillations, however, have nonlinear dynamics (see (2.1)). Stochastic ensemble methods are suitable for nonlinear systems because they do not require the covariance to be propagated explicitly, in contrast to other sequential methods, e.g. the extended Kalman filter (Evensen Reference Evensen2009; Nóvoa & Magri Reference Nóvoa and Magri2022). Stochastic ensemble filters track in time $m$ realizations of the augmented state ${\boldsymbol{\mathit{\psi}}}_j$ to estimate the mean and covariance, respectively

(3.9a)$$\begin{gather} \mathbb{E}({\boldsymbol{\mathit{\psi}}})\approx\bar{{\boldsymbol{\mathit{\psi}}}}=\dfrac{1}{m}\sum^m_{j=1}{{\boldsymbol{\mathit{\psi}}}_j} \end{gather}$$
(3.9b)$$\begin{gather}\boldsymbol{\mathsf{C}}_{\psi\psi} = \begin{bmatrix} \boldsymbol{\mathsf{C}}_{\phi\phi} & \boldsymbol{\mathsf{C}}_{\phi\alpha} & \boldsymbol{\mathsf{C}}_{\phi q} \\ \boldsymbol{\mathsf{C}}_{\alpha \phi} & \boldsymbol{\mathsf{C}}_{\alpha\alpha} & \boldsymbol{\mathsf{C}}_{\alpha q} \\ \boldsymbol{\mathsf{C}}_{q\phi} & \boldsymbol{\mathsf{C}}_{q\alpha} & \boldsymbol{\mathsf{C}}_{qq} \\ \end{bmatrix} \approx\dfrac{1}{m-1}\sum^m_{j=1}({\boldsymbol{\mathit{\psi}}}_j-\bar{{\boldsymbol{\mathit{\psi}}}})\otimes({\boldsymbol{\mathit{\psi}}}_j-\bar{{\boldsymbol{\mathit{\psi}}}}), \end{gather}$$

where $\otimes$ is the dyadic product. Because the forecast operator $\mathcal {F}$ is nonlinear, the Gaussian prior may not remain Gaussian after the model forecast, and $\mathbb {E}(\mathcal {F}({\boldsymbol {\mathit {\psi }}}))\neq \mathcal {F}(\bar {{\boldsymbol {\mathit {\psi }}}})$. However, the time between analyses $\Delta t_d$ is assumed small enough such that the Gaussian distribution is not significantly distorted (Evensen Reference Evensen2009; Yu, Juniper & Magri Reference Yu, Juniper and Magri2019b).
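A minimal sketch of the ensemble statistics (3.9) is given below, with the $m$ realizations of the augmented state stored column-wise.

```python
import numpy as np

def ensemble_stats(Psi):
    """Sample mean (3.9a) and covariance (3.9b) of an (N_psi, m) ensemble."""
    m = Psi.shape[1]
    psi_bar = Psi.mean(axis=1)
    A = Psi - psi_bar[:, None]            # ensemble anomalies
    C_psipsi = A @ A.T / (m - 1)          # sum of dyadic products / (m - 1)
    return psi_bar, C_psipsi
```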

Finally, we approximate the model bias as ${\boldsymbol {\mathit {b}}}\approx {\boldsymbol {\mathit {d}}}^{t} - \boldsymbol{\mathsf{M}}\bar {{\boldsymbol {\mathit {\psi }}}}$; thus, the sum of the biases is approximately equal to the mean of the innovations, i.e.

(3.10)\begin{equation} \bar{{\boldsymbol{\mathit{i}}}} = {\boldsymbol{\mathit{d}}} - \boldsymbol{\mathsf{M}}\bar{{\boldsymbol{\mathit{\psi}}}} \approx {\boldsymbol{\mathit{b}}}_d + {\boldsymbol{\mathit{b}}}. \end{equation}

3.4. The r-EnKF

The objective function in a bias-regularized ensemble DA framework contains three norms (Nóvoa et al. Reference Nóvoa, Racca and Magri2024)

(3.11)\begin{equation} \mathcal{J}({\boldsymbol{\mathit{\psi}}}_j) = \left\|{\boldsymbol{\mathit{\psi}}}_j-{\boldsymbol{\mathit{\psi}}}_j^{f}\right\|^2_{\boldsymbol{\mathsf{C}}^{{f}^{{-}1}}_{\psi\psi}} + \left\|{{\boldsymbol{\mathit{y}}}}_j-{\boldsymbol{\mathit{d}}}_j\right\|^2_{\boldsymbol{\mathsf{C}}^{{-}1}_{dd}}+ \gamma\left\|{\boldsymbol{\mathit{b}}}_j\right\|^2_{\boldsymbol{\mathsf{C}}^{{-}1}_{bb}} \quad \mathrm{for}\ \ j=0,\dots,m-1, \end{equation}

where the superscript ‘f’ indicates ‘forecast’, the operator $\|{\cdot }\|^2_{\boldsymbol{\mathsf{C}}^{-1}}$ is the $L_2$-norm weighted by the semi-positive definite matrix ${\boldsymbol{\mathsf{C}}^{-1}}$, $\gamma \geq 0$ is a user-defined bias regularization factor and ${\boldsymbol {\mathit {b}}}_j$ is the model bias of each ensemble member. For simplicity, we define the bias in the ensemble mean, i.e. ${\boldsymbol {\mathit {b}}}_j = {\boldsymbol {\mathit {b}}}$ for all $j$ (which we estimate with an ESN in § 4) (Nóvoa et al. Reference Nóvoa, Racca and Magri2024). From left to right, the norms on the right-hand side of (3.11) measure (1) the spread of the ensemble prediction, (2) the distance between the bias-corrected estimate and the observables and (3) the model bias. The analytical solution of the r-EnKF in (3.12), which globally minimizes the cost function (3.11) with respect to ${\boldsymbol {\mathit {\psi }}}_j$, is

(3.12a) \begin{equation} {\boldsymbol{\mathit{\psi}}}_j^{a} = {\boldsymbol{\mathit{\psi}}}_j^{f} + \boldsymbol{\mathsf{K}}_r [(\mathbb{I}+ \boldsymbol{\mathsf{J}})^{\mathrm{T}}({\boldsymbol{\mathit{d}}}_j + {\boldsymbol{\mathit{b}}}_d - {\boldsymbol{\mathit{y}}}_j^{f}) - \gamma \boldsymbol{\mathsf{C}}_{dd}\boldsymbol{\mathsf{C}}^{{-}1}_{bb}\boldsymbol{\mathsf{J}}^{\mathrm{T}}{\boldsymbol{\mathit{b}}}], \quad j=0,\dots,m-1, \end{equation}

with

(3.12b) \begin{equation} \boldsymbol{\mathsf{K}}_r = \boldsymbol{\mathsf{C}}_{\psi\psi}^{f}\boldsymbol{\mathsf{M}}^{\mathrm{T}}[\boldsymbol{\mathsf{C}}_{dd} + (\mathbb{I}+ \boldsymbol{\mathsf{J}})^{\mathrm{T}}(\mathbb{I}+ \boldsymbol{\mathsf{J}})\boldsymbol{\mathsf{M}}\boldsymbol{\mathsf{C}}_{\psi\psi}^{f}\boldsymbol{\mathsf{M}}^{\mathrm{T}} + \gamma \boldsymbol{\mathsf{C}}_{dd}\boldsymbol{\mathsf{C}}^{{-}1}_{bb}\boldsymbol{\mathsf{J}}^{\mathrm{T}}\boldsymbol{\mathsf{J}}\boldsymbol{\mathsf{M}}\boldsymbol{\mathsf{C}}_{\psi\psi}^{f}\boldsymbol{\mathsf{M}}^{\mathrm{T}}]^{{-}1}, \end{equation}

where ‘a’ stands for ‘analysis’, i.e.  the optimal state of the assimilation; we assimilate each ensemble member with a different ${\boldsymbol {\mathit {d}}}_j\sim \mathcal {N}({\boldsymbol {\mathit {d}}}, \boldsymbol{\mathsf{C}}_{dd})$ to avoid covariance underestimation in ensemble filters (Burgers, Jan van Leeuwen & Evensen Reference Burgers, Jan van Leeuwen and Evensen1998); $\boldsymbol{\mathsf{K}}_r$ is the regularized Kalman gain matrix and $\boldsymbol{\mathsf{J}}= {\mathrm {d}} {\boldsymbol {\mathit {b}}}/ {\mathrm {d}}\boldsymbol{\mathsf{M}}{\boldsymbol {\mathit {\psi }}}$ is the Jacobian of the bias estimator. Formulae (3.12) generalize the analytical solutions of Nóvoa et al. (Reference Nóvoa, Racca and Magri2024) to the case of a biased measurement. We prescribe $\boldsymbol{\mathsf{C}}_{dd} = \boldsymbol{\mathsf{C}}_{bb}$ because the model bias is defined in the observable space. We use $\gamma$ to tune the norm of the bias in (3.11) (Nóvoa et al. Reference Nóvoa, Racca and Magri2024). The optimal state and parameters are

(3.13) \begin{align} \begin{bmatrix} {\boldsymbol{\mathit{\phi}}}_j^{a}\\ {\boldsymbol{\mathit{\alpha}}}_j^{a} \end{bmatrix} &= \begin{bmatrix} {\boldsymbol{\mathit{\phi}}}_j^{f}\\ {\boldsymbol{\mathit{\alpha}}}_j^{f} \end{bmatrix} + \overbrace{ \begin{bmatrix} \boldsymbol{\mathsf{C}}_{\phi q}^{f}\\ \boldsymbol{\mathsf{C}}_{\alpha q}^{f} \end{bmatrix} \{\boldsymbol{\mathsf{C}}_{dd}+(\mathbb{I}+ \boldsymbol{\mathsf{J}})^{\mathrm{T}}(\mathbb{I}+ \boldsymbol{\mathsf{J}})\boldsymbol{\mathsf{C}}_{qq}^{f} +\gamma \boldsymbol{\mathsf{J}}^{\mathrm{T}}\boldsymbol{\mathsf{J}}\boldsymbol{\mathsf{C}}_{qq}^{f}\}^{{-}1} }^{\text{Regularized Kalman gain,}\, \boldsymbol{\mathsf{K}}_r} \ldots \nonumber\\ &\quad \dots\left[\left(\mathbb{I}+ \boldsymbol{\mathsf{J}}\right)^{\mathrm{T}}\overbrace{({\boldsymbol{\mathit{d}}}_j + {\boldsymbol{\mathit{b}}}_d - {\boldsymbol{\mathit{y}}}_j^{f})}^\text{Corrected innovation} - \gamma \boldsymbol{\mathsf{J}}^{\mathrm{T}}{\boldsymbol{\mathit{b}}}\right]. \end{align}

The r-EnKF defines a ‘good’ analysis from a biased model if the unbiased state ${\boldsymbol {\mathit {y}}}$ matches the truth, and the model bias ${\boldsymbol {\mathit {b}}}$ is small relative to the truth. The underlying assumptions of this work are that (i) our low-order model is qualitatively accurate such that the model bias ${\boldsymbol {\mathit {b}}}$ has a small norm, and (ii) the sensors are properly calibrated. In the limiting case when the assimilation framework is unbiased (i.e. ${\boldsymbol {\mathit {b}}}={\boldsymbol {\mathit {0}}}$ and ${\boldsymbol {\mathit {b}}}_d ={\boldsymbol {\mathit {0}}}$), the r-EnKF (3.13) becomes the bias-unregularized EnKF (Appendix A).
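A minimal sketch of one r-EnKF analysis step (3.12) is given below. It follows the formulas above with the prescription $\boldsymbol{\mathsf{C}}_{dd}=\boldsymbol{\mathsf{C}}_{bb}$, so the factor $\boldsymbol{\mathsf{C}}_{dd}\boldsymbol{\mathsf{C}}^{-1}_{bb}$ reduces to the identity. It is an illustrative implementation, not the authors' code, and the bias ${\boldsymbol{\mathit{b}}}$, measurement shift ${\boldsymbol{\mathit{b}}}_d$ and Jacobian $\boldsymbol{\mathsf{J}}$ are assumed to be supplied by the bias estimator of § 4.

```python
import numpy as np

def r_enkf_analysis(Psi_f, M, d, C_dd, b, b_d, J, gamma, rng):
    """One r-EnKF analysis (3.12): Psi_f is the (N_psi, m) forecast ensemble."""
    N_q, m = d.size, Psi_f.shape[1]
    psi_bar = Psi_f.mean(axis=1)
    A = Psi_f - psi_bar[:, None]
    C_psipsi = A @ A.T / (m - 1)                  # forecast covariance (3.9b)
    C_psi_q = C_psipsi @ M.T                      # cross-covariance with q
    C_qq = M @ C_psi_q                            # observable covariance
    IJ = np.eye(N_q) + J
    # Regularized Kalman gain (3.12b), with C_dd = C_bb.
    K_r = C_psi_q @ np.linalg.inv(
        C_dd + IJ.T @ IJ @ C_qq + gamma * J.T @ J @ C_qq)
    Psi_a = np.empty_like(Psi_f)
    for j in range(m):
        d_j = rng.multivariate_normal(d, C_dd)    # perturbed observation
        y_j = M @ Psi_f[:, j] + b                 # bias-corrected observable
        Psi_a[:, j] = Psi_f[:, j] + K_r @ (
            IJ.T @ (d_j + b_d - y_j) - gamma * J.T @ b)
    return Psi_a
```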

4. Reservoir computing for inferring model bias and measurement shift

To apply the r-EnKF (3.13), we must provide an estimate of the model bias and the measurement shift. We employ an ESN for this task. Echo state networks are suitable for real-time DA because (i) they are recurrent neural networks, i.e. they are designed to learn temporal dynamics in data; (ii) they are based on reservoir computing, hence, they are universal approximators (Grigoryeva & Ortega Reference Grigoryeva and Ortega2018); (iii) they are general nonlinear auto-regressive models (Aggarwal Reference Aggarwal2018); and (iv) training an ESN consists of solving a linear regression problem, which provides a global minimum without back propagation. In this work, we generalize the implementation of Nóvoa et al. (Reference Nóvoa, Racca and Magri2024) to account for both model bias and measurement shift (§ 4.1). We propose a training dataset generation based on time-series correlation (§ 4.2).

4.1. Echo state network architecture and state-space formulation

We propose an ESN that simultaneously estimates ${\boldsymbol {\mathit {b}}}$ and ${\boldsymbol {\mathit {b}}}_d$. Importantly, ${\boldsymbol {\mathit {b}}}$ and ${\boldsymbol {\mathit {b}}}_d$ are not directly measurable in real time, but we have access to the innovation, which is linked to the biases through (3.10). Once we have information on the innovation and the model bias (the measurement shift), we can estimate the measurement shift (the model bias) through (3.10).

Figure 4(a) represents pictorially the proposed ESN at time $t_k$, with its three main components: (i) the input data, which are the mean analysis innovations, i.e. $\bar {{\boldsymbol {\mathit {i}}}}_k$; (ii) the reservoir, which is a high-dimensional state characterized by the sparse reservoir matrix $\boldsymbol{\mathsf{W}}\in \mathbb {R}^{N_{r}\times N_{r}}$ and vector ${\boldsymbol {\mathit {r}}}_k\in \mathbb {R}^{N_{r}}$ ($N_r\gg N_q$ is the number of neurons in the reservoir); and (iii) the outputs, which are the mean innovation and the model bias at the next time step, i.e. $\bar {{\boldsymbol {\mathit {i}}}}_{k+1}$ and ${\boldsymbol {\mathit {b}}}_{k+1}$. The inputs to the network are a subset of the output (i.e. the innovation only). The sparse input matrix $\boldsymbol{\mathsf{W}}_{in}\in \mathbb {R}^{N_{r}\times (N_q+1)}$ maps the physical state into the reservoir, and the output matrix $\boldsymbol{\mathsf{W}}_{out}\in \mathbb {R}^{2N_q\times (N_r+1)}$ maps the reservoir state back to the physical state. Furthermore, figure 4(a) shows the two forecast settings of the ESN: open loop, which is performed when data on the innovations are available, and closed loop, in which the ESN runs autonomously using the output as the input at the next time step. Figure 4(b) shows the unfolded architecture starting at time $t_k$, when observations of the innovations are available (i.e. at the analysis step). At this time, the reservoir is re-initialized with the analysis innovations such that the input to the ESN is $\bar {{\boldsymbol {\mathit {i}}}}^{a}_{k}={\boldsymbol {\mathit {d}}}_k-\boldsymbol{\mathsf{M}}\bar {{\boldsymbol {\mathit {\psi }}}}_k^{a}$. From this, the ESN estimates the model bias and the innovation $\bar {{\boldsymbol {\mathit {i}}}}_{k+1}$ at the next time step $t_{k+1}$; the latter is used as an initial condition in the subsequent forecast step. Mathematically, the equations that forecast the model bias and mean innovation in time are

(4.1)\begin{equation} \left.\begin{gathered} {}[{\boldsymbol{\mathit{b}}}_{k+1}; \bar{{\boldsymbol{\mathit{i}}}}_{k+1}] = \boldsymbol{\mathsf{W}}_{out}\left[{\boldsymbol{\mathit{r}}}_{k+1}; 1\right],\\ {\boldsymbol{\mathit{r}}}_{k+1} = \textrm{tanh}\left(\sigma_{in}\boldsymbol{\mathsf{W}}_{in}\left[\bar{{\boldsymbol{\mathit{i}}}}^\star_k\odot {\boldsymbol{\mathit{g}}}; \delta_r\right]+ \rho\boldsymbol{\mathsf{W}}{\boldsymbol{\mathit{r}}}_{k}\right), \end{gathered}\right\} \end{equation}

where $\bar {{\boldsymbol {\mathit {i}}}}^\star =\bar {{\boldsymbol {\mathit {i}}}}^{a}$ in open loop and $\bar {{\boldsymbol {\mathit {i}}}}^\star = \bar {{\boldsymbol {\mathit {i}}}}$ in closed loop; the ${\tanh ({\cdot })}$ operation is performed element-wise; the operator $\odot$ is the Hadamard product, i.e. component-wise multiplication; and ${\boldsymbol {\mathit {g}}}=[g_1;\dots ; g_{2N_q}]$ is the input normalizing term with $g_q^{-1} = \max {({\boldsymbol {\mathit {u}}}_q)}-\min {({\boldsymbol {\mathit {u}}}_q)}$, i.e. $g_q^{-1}$ is the range of the $q\mathrm {th}$ component of the training data $\boldsymbol{\mathsf{U}}$; $\delta _r$ is a constant used for breaking the symmetry of the ESN (we set $\delta _r=0.1$) (Huhn & Magri Reference Huhn and Magri2020). We define $\boldsymbol{\mathsf{W}}_{in}$ and $\boldsymbol{\mathsf{W}}$ as sparse and randomly generated, with a connectivity of 3 neurons (see Jaeger & Haas (Reference Jaeger and Haas2004) for details). We compute the matrix $\boldsymbol{\mathsf{W}}_{out}$ during the training.
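A minimal sketch of the forecast step (4.1) is given below. The matrices are dense random placeholders (the paper uses sparse matrices with a connectivity of 3 neurons), the hyperparameter values are arbitrary, and the normalization vector is applied to the $N_q$ innovation inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N_q, N_r = 4, 100
sigma_in, rho, delta_r = 0.1, 0.9, 0.1            # placeholder hyperparameters

W_in = rng.uniform(-1, 1, (N_r, N_q + 1))         # sparse in the paper; dense here
W = rng.uniform(-1, 1, (N_r, N_r))
W /= np.max(np.abs(np.linalg.eigvals(W)))         # unit spectral radius
W_out = np.zeros((2 * N_q, N_r + 1))              # found by ridge regression (4.2)
g = np.ones(N_q)                                  # input normalization (placeholder)

def esn_step(i_star, r):
    """One step of (4.1): returns the model bias, the innovation and the reservoir."""
    r_next = np.tanh(sigma_in * W_in @ np.concatenate([i_star * g, [delta_r]])
                     + rho * W @ r)
    out = W_out @ np.concatenate([r_next, [1.0]])
    return out[:N_q], out[N_q:], r_next
```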

Figure 4. Schematic representation of the proposed ESN architecture for model bias and innovation prediction. (a) Compact architecture showing the open-loop and closed-loop configurations; and (b) unfolded architecture starting at an analysis step at $t_k$, in which there is one open-loop step followed by a closed-loop forecast. In open loop the input to the ESN is the analysis mean innovation $\bar {{\boldsymbol {\mathit {i}}}}^{a}_{k}={\boldsymbol {\mathit {d}}}_k-\boldsymbol{\mathsf{M}}\bar {{\boldsymbol {\mathit {\psi }}}}_k^{a}$, i.e. the difference between the raw acoustic pressure data and the analysis model estimate. The outputs from the ESN are (i) the innovation at the next time step $\bar {{\boldsymbol {\mathit {i}}}}_{k+1}$ (which becomes the input to the next time step in the closed-loop configuration); and (ii) the model bias ${\boldsymbol {\mathit {b}}}_{k+1}$, i.e. an estimate of the difference between the presumed acoustic state ${\boldsymbol {\mathit {d}}}_{k+1}^{\dagger}$ and the model estimate $\boldsymbol{\mathsf{M}}\bar{\boldsymbol{\psi}}_{k+1}$.

Lastly, the r-EnKF equations (3.12) require the definition of the Jacobian of the bias estimator at the analysis step. Because at the analysis step the ESN inputs the analysis innovations with an open-loop step (see figure 4b), the Jacobian of the bias estimator is equivalent to the negative Jacobian of the ESN in open-loop configuration (see Appendix B).

4.2. Training the network

During training, the ESN is in an open-loop configuration (figure 4a). The inputs to the reservoir are the training dataset $\boldsymbol{\mathsf{U}}=[{\boldsymbol {\mathit {u}}}_0\,|\dots |\,{\boldsymbol {\mathit {u}}}_{N_{tr}-1}]$, in which each time component ${\boldsymbol {\mathit {u}}}_k = [{\boldsymbol {\mathit {b}}}_{u}(t_k); {{\boldsymbol {\mathit {i}}}}_{u}(t_k)]$, the subscript ‘$u$’ indicates training data, and the operator $[~|~]$ indicates horizontal concatenation. Although we have information from the experimental data to train the network, the optimal parameters of the thermoacoustic system are unknown. Thus, we do not know a priori the model bias and measurement shift. Selecting an appropriate training dataset is key to obtaining a robust ESN, which can estimate the model bias and innovations. We create a set of $L$ guesses on the bias and innovations from a single realization of the experimental data, which means that the ESN is not trained with the ‘true’ bias (which is an unknown quantity in real time). The training data are generated from the experimental data as detailed in Appendix C. In summary, the procedure is as follows.

(i) Take acoustic pressure measurements $\boldsymbol{\mathsf{D}}_{u}$ over a training time window $t_{tr}$, and estimate $\boldsymbol{\mathsf{D}}^{\dagger}_{u}$ by applying a Butterworth filter to the raw data $\boldsymbol{\mathsf{D}}_{u}$ (see § 2).

(ii) Generate $L$ model estimates $\boldsymbol{\mathsf{Q}}_{{u}, l}$ of the acoustic pressure from (2.4). Each $\boldsymbol{\mathsf{Q}}_{{u}, l}$ has a different set of parameters, which are uniformly randomly generated (the parameters’ ranges are reported in Appendix D).

(iii) Correlate in time each $\boldsymbol{\mathsf{Q}}_{{u}, l}$ with $\boldsymbol{\mathsf{D}}_{u}$.

(iv) Compute the model bias and innovations as $\boldsymbol{\mathsf{B}}_{{u},l}= \boldsymbol{\mathsf{D}}^{\dagger}_{u} - \boldsymbol{\mathsf{Q}}_{{u},l}$ and $\boldsymbol{\mathsf{I}}_{{u},l} = \boldsymbol{\mathsf{D}}_{u} - \boldsymbol{\mathsf{Q}}_{{u},l}$. (A sketch of steps (ii)–(iv) is given below.)
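A minimal sketch of steps (ii)–(iv) follows. The low-order model run is passed in as a callable, and the time alignment of step (iii) is implemented here as a cross-correlation lag, which is an assumption about the correlation step detailed in Appendix C.

```python
import numpy as np
from scipy.signal import correlate

def make_training_pairs(D_u, D_dagger_u, run_model, param_ranges, L, rng):
    """Return the lists of model biases B_{u,l} and innovations I_{u,l}."""
    biases, innovations = [], []
    for _ in range(L):
        # (ii) one model estimate with uniformly sampled parameters
        alpha = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        Q = run_model(alpha)                       # pressure, same shape as D_u
        # (iii) align the model run with the data (lag of maximum correlation)
        lag = np.argmax(correlate(D_u[:, 0], Q[:, 0])) - (Q.shape[0] - 1)
        Q = np.roll(Q, lag, axis=0)
        # (iv) model bias and innovations
        biases.append(D_dagger_u - Q)
        innovations.append(D_u - Q)
    return biases, innovations
```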

Finally, we apply data augmentation to improve the network's adaptivity in a real-time assimilation framework. The total number of training time series in the proposed training method is $2L$. Training the ESN consists of finding the elements in $\boldsymbol{\mathsf{W}}_{out}$ which minimize the distance between the outputs obtained from an open-loop forecast step of the training data (the features) and the training data at the following time step (the labels). This minimization is solved by ridge regression of the linear system (Lukoševičius Reference Lukoševičius2012)

(4.2)\begin{equation} \left(\sum_{l = 0}^{2L-1}\boldsymbol{\mathsf{R}}^{\,}_l\boldsymbol{\mathsf{R}}_l^{\mathrm{T}} + \lambda\, \mathbb{I}_{N_r+1}\right)\boldsymbol{\mathsf{W}}_{out}^{\mathrm{T}} = \sum_{l=0}^{2L-1}\boldsymbol{\mathsf{R}}_l^{\,}\boldsymbol{\mathsf{U}}_l^{\mathrm{T}}, \end{equation}

where $\lambda$ is the Tikhonov regularization parameter and $\boldsymbol{\mathsf{R}} = [[{\boldsymbol {\mathit {r}}}_0; 1]\,|\dots |\,[{\boldsymbol {\mathit {r}}}_{N_{tr}-1}; 1]]$, with ${\boldsymbol {\mathit {r}}}_0={\boldsymbol {\mathit {0}}}$; the ${\boldsymbol {\mathit {r}}}_k$ are obtained with (4.1) using the innovations of the training set as inputs. The summations over $2L$ can be performed in parallel to minimize computational costs. Finally, we tune the hyperparameters of the ESN, i.e. the spectral radius $\rho$, the input scaling $\sigma _{in}$ and the Tikhonov regularization parameter $\lambda$. We use a recycle validation strategy with Bayesian optimization for the hyperparameter selection (see Racca & Magri Reference Racca and Magri2021).
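The linear system (4.2) can be assembled and solved as in the sketch below, where each $\boldsymbol{\mathsf{R}}_l$ stacks the bias-augmented reservoir states $[{\boldsymbol{\mathit{r}}}_k; 1]$ column-wise and each $\boldsymbol{\mathsf{U}}_l$ stacks the corresponding targets.

```python
import numpy as np

def train_Wout(R_list, U_list, lam):
    """Solve (sum_l R_l R_l^T + lam I) W_out^T = sum_l R_l U_l^T for W_out."""
    N_r1 = R_list[0].shape[0]                     # N_r + 1
    lhs = lam * np.eye(N_r1)
    rhs = np.zeros((N_r1, U_list[0].shape[0]))
    for R_l, U_l in zip(R_list, U_list):          # sums over the 2L series
        lhs += R_l @ R_l.T
        rhs += R_l @ U_l.T
    return np.linalg.solve(lhs, rhs).T            # W_out
```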

5. Implementation

To summarize, the following four stages are necessary to design the real-time digital twin.

(1) Initialization: the ensemble ${\boldsymbol {\mathit {\psi }}}_j\ \mathrm {for}\ j = 1,\ldots,m$ and the ESN are initialized using the parameters reported in Appendix D.

(2) Forecast: time march, in parallel, each ensemble member (3.8), i.e. the system of thermoacoustic equations and parameters, and the ESN according to (4.1), until observation data become available.

(3) Analysis: apply the r-EnKF (3.12), which optimally combines the unbiased model estimate and the observation data.

(4) Re-initialization: update the state and parameters of the ensemble with the analyses (i.e. ${\boldsymbol {\mathit {\phi }}}^{a}$ and ${\boldsymbol {\mathit {\alpha }}}^{a}$), and the ESN with the analysis innovation (i.e. $\bar {{\boldsymbol {\mathit {i}}}}^{a} = {\boldsymbol {\mathit {d}}} - \boldsymbol{\mathsf{M}}\bar {{\boldsymbol {\mathit {\psi }}}}^{a}$). A sketch of one forecast–analysis cycle is given below.
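The sketch below ties together stages (2)–(4) for one cycle, reusing the r-EnKF and ESN sketches above (passed in as callables) and closing the measurement-shift estimate with (3.10), i.e. ${\boldsymbol{\mathit{b}}}_d\approx\bar{{\boldsymbol{\mathit{i}}}}-{\boldsymbol{\mathit{b}}}$. The function boundaries and this closure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def assimilation_cycle(Psi, r, b, b_d, d_k, M, C_dd, J, gamma, rng,
                       forecast_ensemble, analysis, esn_step):
    """Stages (2)-(4): forecast, r-EnKF analysis and re-initialization."""
    Psi_f = forecast_ensemble(Psi)                               # (2) march to t_k
    Psi_a = analysis(Psi_f, M, d_k, C_dd, b, b_d, J, gamma, rng)  # (3) analysis
    i_a = d_k - M @ Psi_a.mean(axis=1)                           # analysis innovation
    b_next, i_next, r_next = esn_step(i_a, r)                    # (4) open-loop ESN step
    b_d_next = i_next - b_next                                   # measurement shift, (3.10)
    return Psi_a, r_next, b_next, b_d_next
```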

Steps (2)–(4) are repeated sequentially as data become available. Once the assimilation process has ended, we forecast the ensemble and the ESN further to analyse the extrapolation and generalization capability of the estimated state, parameters and biases.
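For illustration, the forecast-analysis cycle of stages (2)–(4) can be sketched in a few lines of Python; the functions and array sizes below are trivial stand-ins for the operations of §§ 3–4 (they are not the interface of the released code), so that only the structure of the loop is shown.

```python
import numpy as np

# Stand-ins so that the sketch runs; in the digital twin they are the thermoacoustic
# model forecast (3.8), the ESN forecast (4.1), the r-EnKF analysis (3.12) and the
# ESN open-loop re-initialization.
forecast_ensemble = lambda Psi, dt: Psi                     # placeholder: model forecast
forecast_esn      = lambda r: (r, np.zeros(4))              # placeholder: ESN bias estimate
r_enkf_analysis   = lambda Psi, d, b: Psi                   # placeholder: analysis (3.12)
esn_open_loop     = lambda r, innovation: r                 # placeholder: ESN update
M = lambda psi_mean: psi_mean[:4]                           # placeholder: measurement operator

Psi = np.zeros((10, 20))          # ensemble of states and parameters (N_psi x m), placeholder
r = np.zeros(100)                 # reservoir state, placeholder
for d_k in np.zeros((600, 4)):    # stream of 600 four-microphone observations, placeholder
    Psi = forecast_ensemble(Psi, 5.86e-4)      # (2) forecast until data become available
    r, b_k = forecast_esn(r)                   # (2) ESN closed-loop bias estimate
    Psi = r_enkf_analysis(Psi, d_k, b_k)       # (3) bias-regularized analysis
    innovation = d_k - M(Psi.mean(axis=1))     # (4) analysis innovation
    r = esn_open_loop(r, innovation)           # (4) re-initialize the ESN
```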

6. Performance of the real-time digital twin

The real-time digital twin has four components: (i) data from sensors (§ 2.1), (ii) the physics-based low-order model (§ 2.2), (iii) the ESN to infer the model bias and measurement shift (§ 4), and (iv) the real-time and bias-regularized DA method (§ 3.4) to twin sensors' data with the low-order model to infer the state and model parameters. We investigate the capability of the proposed real-time digital twin to predict autonomously the dynamics of azimuthal thermoacoustic oscillations. We analyse the digital twin's performance with respect to established methods: the bias-unregularized EnKF (Nóvoa & Magri 2022), which is suitable for real-time assimilation, and Langevin-based regression (Indlekofer et al. 2022), which is suitable for offline assimilation. The hyperparameters used to train the ESN and the assimilation parameters are reported in Appendix D.

6.1. Time-accurate prediction

We focus our analysis on the equivalence ratio $\varPhi =0.5125$, which is a thermoacoustically unstable system. Figure 5 shows the time evolution of the acoustic pressure at $\theta =60^\circ$. Prior to the assimilation, we train the ESN with training data over $t_{tr}=0.167$ s. We start the assimilation at $0.5\ {\rm s}$ to avoid using training data during the assimilation and to have the ensemble in the statistically stationary regime. In parallel, we initialize the ESN before the assimilation begins with 10 data points (i.e. the washout). After this, observations from the raw experimental data (red dots) are assimilated every $\Delta t_d=5.86\times 10^{-4}$ s, which corresponds to approximately $0.64$ data points per acoustic period. (The assimilation frequency was selected by down-sampling the raw data, which are recorded at 51.2 kHz, such that $\Delta t_d$ fulfils the Shannon–Nyquist criterion.) At a time instant, the measurement (the data point, red dot) is assimilated into the model, which adaptively updates itself through the r-EnKF and bias estimators (§§ 3.4, 4). Once the state and model parameters have been updated, the low-order model (2.5) evolves autonomously until the next data point becomes available. This process mimics a stream of data coming from sensors on the fly. We assimilate 600 measurements during the assimilation window of 0.35 s; afterwards ($t>0.85$ s), the model runs autonomously without seeing more data.
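For reference, the assimilation interval corresponds to an assimilation frequency of $1/\Delta t_d = 1/(5.86\times 10^{-4}\ \mathrm{s}) \approx 1707$ Hz, i.e. approximately one sample retained out of every $51\,200/1707\approx 30$ raw samples.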

Figure 5. Real-time digital twin at $\varPhi =0.5125$. Time evolution of the thermoacoustic pressure (in Pa) at $\theta =60^\circ$ using (a) the bias-unregularized EnKF and (b) the proposed r-EnKF. Comparison between the presumed ground truth (thick grey line), the raw data (thick red line), the prediction from the ensemble filter (cyan lines) and, in (b), the bias-corrected mean estimate (navy dashed line). The close-ups show the start and end of the assimilation window, which is indicated by the vertical dashed lines. The observations are shown as red circles.

First, we analyse the performance of the bias-unregularized filter (EnKF) on state estimation (figure 5a), with the corresponding parameter inference shown in figure 6 (dashed lines). Although the system has a self-sustained oscillation, the bias-unregularized filter converges towards an incorrect solution, that is, a fixed point. On the other hand, the bias-regularized filter (r-EnKF) successfully learns the thermoacoustic model. Second, the r-EnKF filters out aleatoric noise and turbulent fluctuations. These fluctuations, of up to approximately 200 Pa, are the difference between the amplitudes of the raw data (the input to the digital twin) and the post-processed data (the hidden acoustic state that we wish to uncover and predict, i.e. the presumed acoustic state); see the close-ups at the end of assimilation in figure 5. Third, after assimilation, the r-EnKF predicts the thermoacoustic limit cycle beyond the assimilation window. This means that the digital twin has learnt an accurate model of the system, which generalizes beyond the assimilation. The ensemble model observables ${\boldsymbol {\mathit {q}}}_j$ and bias-corrected ensemble mean $\bar {{\boldsymbol {\mathit {y}}}} = \bar {{\boldsymbol {\mathit {q}}}} + {\boldsymbol {\mathit {b}}}$ are almost identical at convergence in figure 5(b), which means that the magnitude of the model bias is small (approximately 20 Pa in amplitude; see figure 8). Fourth, the r-EnKF infers the system's optimal thermoacoustic parameters, of which there are seven in this model (figure 6). The ensemble parameters at $t=0$ in figure 6 are initialized within physical ranges around the parameters used in the literature, which are computed by Langevin regression in Indlekofer et al. (2022). From inspection of the optimal parameters, we can draw physical conclusions: (i) the parameters of the literature are not the optimal parameters of the model; (ii) the linear growth rate, $\nu$, the angular frequency, $\omega$, the heat release strength weighted by the symmetry intensity, $c_2\beta$, and the phase of the reactive symmetry, $\varTheta _\epsilon$, do not vary significantly in the statistically stationary regime, which means that these parameters are constants and do not depend on the state; and (iii) the nonlinear parameter, $\kappa$, the amplitude of the reactive symmetry, $\epsilon$, and the direction of asymmetry, $\varTheta _\beta$, have temporal variations that follow the modulation of the pressure signal envelope (figure 6). Physically, this means that the optimal deterministic system that represents azimuthal instabilities is a time-varying parameter system (Durbin & Koopman 2012). By inferring time-varying parameters, we derive a deterministic system that does not require stochastic modelling. We further analyse the thermoacoustic parameters for all the equivalence ratios in figures 10 and 11.

Figure 6. Thermoacoustic parameters with the bias-unregularized EnKF (dashed) and the r-EnKF (solid). The lines and shaded areas indicate the ensemble mean and standard deviation, respectively. Here $\varPhi =0.5125$.

To summarize, the presence of model bias and measurement shift makes the established bias-unregularized EnKF fail to uncover and predict the physical state from noisy data. In contrast, the bias-regularized filter (r-EnKF) infers the state, parameters, model bias and measurement shift, which enable a time-accurate and physical prediction beyond the assimilation window.

6.2. Statistics and biases

We analyse the uncertainty and the statistics of the time series (§ 6.1) generated by the real-time digital twin. To do so, we define the normalized r.m.s. error of two time series ${\boldsymbol {\mathit {w}}}, {\boldsymbol {\mathit {z}}}$ with $N_q$ dimensions and $N_t$ time steps as

(6.1)\begin{equation} \mathrm{R.m.s.}({\boldsymbol{\mathit{w}}}, {\boldsymbol{\mathit{z}}}) = \sqrt{\dfrac{\sum_{q}\sum_{k} \left({w}_q(t_k)- {z}_q(t_k)\right)^2}{\sum_{q} \sum_{k}\left({w}_q(t_k)\right)^2}}, \quad \mathrm{for} \ \begin{array}{l} q=0,\dots, N_q-1, \\ k=0,\dots,N_t-1. \end{array} \end{equation}
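For illustration, (6.1) can be evaluated with a few lines of numpy; the signals below are synthetic placeholders rather than the experimental data.

```python
import numpy as np

def normalized_rms(w, z):
    """Normalized r.m.s. error (6.1) between two time series of shape (N_q, N_t)."""
    return np.sqrt(np.sum((w - z) ** 2) / np.sum(w ** 2))

# Example: error between a reference signal w and a noisy copy z (synthetic placeholders).
t = np.linspace(0.0, 0.35, 600)
w = np.stack([np.cos(2.0 * np.pi * 1100.0 * t + k * np.pi / 3.0) for k in range(4)])  # (4, 600)
z = w + 0.1 * np.random.default_rng(0).normal(size=w.shape)
print(normalized_rms(w, z))   # of the order of 0.1 for this noise level
```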

Figure 7 shows the r.m.s. errors at four critical time instants in the assimilation process: before the data are assimilated (first column), which corresponds to the initialization of the digital twin; at the start and end of the DA window (second and third columns, respectively); and after the DA (fourth column), i.e. when the digital twin evolves autonomously to predict unseen dynamics (generalization). The inference of the model bias and measurement shift is key to obtaining a small generalization error. Whereas the bias-unregularized EnKF converges to an unphysical solution with a large r.m.s., the bias-regularized filter converges to a physical solution with a small r.m.s. As the assimilation progresses, the ESN improves the prediction of the model bias because the r.m.s. of the bias-corrected prediction (figure 7b, navy) is smaller than that of the model estimate (figure 7b, cyan). This is further evidenced by analysing the histograms of the time series for the four azimuthal locations (figure 8). The digital twin (bias-corrected solution, navy histograms) converges to the expected distribution of the acoustic pressure, i.e. the presumed truth (black histograms), despite the fact that the assimilated data are substantially contaminated by noise and turbulent fluctuations (red histograms). The bias-corrected histograms have a zero mean, which means that the ESN in the digital twin has correctly inferred both the model bias and the measurement shift. Both biases are shown in figure 9. The measurement shift, whose presumed true value ${\boldsymbol {\mathit {b}}}_d^{\dagger} = \langle {\boldsymbol {\mathit {d}}}-{\boldsymbol {\mathit {d}}}^{\dagger} \rangle$ (where the brackets $\langle ~\rangle$ indicate the time average) is known a priori, is exactly inferred by the ESN. On the other hand, the presumed true value of the model bias is not known a priori; we presume it as ${\boldsymbol {\mathit {b}}}^{\dagger} = {\boldsymbol {\mathit {d}}}^{\dagger} - {\boldsymbol {\mathit {q}}}^{\dagger}$, where ${\boldsymbol {\mathit {q}}}^{\dagger}$ is the (also unknown) presumed true model estimate. The digital twin improves our knowledge of the model bias by correcting the presumed model bias.

Figure 7. Normalized r.m.s. errors between the presumed ground truth and the model prediction (cyan) and between the presumed ground truth and the bias-corrected prediction (navy) with (a) the bias-unregularized EnKF and (b) the r-EnKF. The filled histograms show the r.m.s. of each ensemble member, the thick lines show the r.m.s. of the ensemble mean and the vertical dashed line indicates the mean of the r.m.s. error. The errors are computed at four stages of the assimilation: $t\in [0.49, 0.50]$ (before DA), $t\in [0.50, 0.51]$ (start of DA), $t\in [0.84, 0.85]$ (end of DA) and $t\in [0.85, 0.86]$ (after DA). Here $\varPhi =0.5125$.

Figure 8. Histograms of the acoustic pressure after assimilation with (a) the bias-unregularized EnKF and (b) the r-EnKF, at the four observed azimuthal locations. Comparison between the presumed ground truth (black), the observations from raw data (red), the ensemble prediction (filled cyan) and its mean (dashed teal), and bias-corrected ensemble predictions (filled navy) and its mean (dashed light blue). The vertical lines indicate the mean of each of the distributions. Here $\varPhi =0.5125$.

Figure 9. Model bias and measurement shift estimates after assimilation with the r-EnKF. Comparison between the ESN inference of (a) the model bias (dashed black) and the presumed model bias (thick grey); and (b) the measurement shift estimate (dashed black), difference between the raw and post-processed data (thin grey), and its time average, $\langle {\cdot }\rangle$ (thick grey). Here $\varPhi =0.5125$.

To summarize, the r-EnKF generalizes to unseen scenarios and extrapolates correctly in time. Key to the robust performance is the inference of the biases in the model and measurements.

6.3. Physical parameters and comparison with the literature

We analyse the system's physical parameters (§ 2.2 and (2.4)) for all the available equivalence ratios. We train an ESN for each equivalence ratio. The DA parameters (assimilation window, frequency, ensemble size, etc.) are the same in all tests (Appendix D). We compare the low-order model parameters (i) inferred by the bias-unregularized EnKF, (ii) inferred by the r-EnKF, and (iii) obtained with the state-of-the-art Langevin regression by Indlekofer et al. (2022). The linear growth rate, $\nu$, and the heat source strength weighted by the symmetry intensity, $c_2\beta$, are shown in figure 10; and the remaining parameters are shown in figure 11. The specific parameter values are listed in Appendix E.

Figure 10. Comparison of the estimated values of the two stability parameters $c_2\beta$ (circles) and $\nu$ (triangles) after DA, obtained with the bias-unregularized EnKF (black) and the r-EnKF with different bias regularization parameters (colour maps). Lighter colours indicate stronger regularization; larger marker sizes indicate more accurate solutions, i.e. smaller r.m.s. errors; and the error bars represent the ensemble standard deviation. The dashed lines correspond to the parameters identified by offline Langevin regression (Indlekofer et al. 2022).

Figure 11. Same as figure 10 for the remaining physical parameters.

First, the r-EnKF infers physical parameters with a small generalization error (the r.m.s. errors are small, depicted with large markers). Large values of the bias regularization, i.e. $\gamma \gtrsim 1$, provide the most accurate set of parameters. This physically means that the low-order model is a qualitatively accurate model because the bias has a small norm (which corresponds to a large $\gamma$). The bias-unregularized EnKF is outperformed in all cases. (The case with $\varPhi =0.4875$ is a stable configuration with a trivial fixed point, which is why the parameters have a larger r.m.s.) Second, we analyse the uncertainty of the parameters, which is shown by an error bar. The height of the error bar is the ensemble standard deviation. The linear growth rate, $\nu$, and the angular frequency, $\omega$, are inferred with small uncertainties. Physically, this means that the model predictions are markedly sensitive to small changes in the linear growth rate and angular frequency. The prediction's sensitivity to the parameters changes with the equivalence ratio. Physically, the large sensitivity of the linear growth rate (i.e. the system's linear stability) to the operating condition is a characteristic feature of thermoacoustic instabilities, in particular in annular combustors, in which eigenvalues are degenerate (e.g. Magri et al. 2023). Third, there exist multiple combinations of parameters that provide a similar accuracy. For example, in the case $\varPhi =0.5125$, there are different combinations of $\varTheta _\beta, \epsilon, c_2\beta$, which yield a small r.m.s. Fourth, we compare the digital twin with the offline Langevin regression of Indlekofer et al. (2022) (horizontal dashed lines in figures 10 and 11). The inferred parameters are physically meaningful because they lie within value ranges that are similar to the parameters of the literature. The digital twin simultaneously infers all the parameters in real time, which overcomes the limitations of Langevin regression, which needs to be performed on one parameter at a time and offline. The reference parameters shown in figures 10 and 11 (dashed lines) depend linearly on $\varPhi$. These linear dependencies are approximations of the ground truth and were inferred from the identified set of parameters at each $\varPhi$ (see figure 9 in Indlekofer et al. 2022). When tuned for a specific $\varPhi$, the offline parameters lead to state statistics that more closely match the observables. The digital twin parameters are optimal because they are the minimizers of (3.11). The long-term statistics of the case $\varPhi =0.5625$, whose acoustic state lives on a generalized Bloch sphere (Magri et al. 2023), are further analysed with the quaternion ansatz of Ghirardo & Bothien (2018). Figure 12 shows the acoustic state for $\varPhi =0.5625$ from the raw data (panel b) and the proposed digital twin (panel c). The long-term statistics are obtained by forecasting the model (without assimilation) using the identified parameters (reported in Appendix E). The real-time digital twin infers a set of physical parameters, which correctly capture the physical state of the azimuthal acoustic mode. The nature angle has a bimodal distribution with $|\chi |<{\rm \pi} /4$, which means that the thermoacoustic instability is a mixed mode.

Figure 12. Acoustic state for $\varPhi =0.5625$. (a) Bloch sphere for quaternion representation of the acoustic pressure $p(\theta, t) = A \cos {(\vartheta - \theta )} \cos {\chi } \cos {(\omega t + \varphi )} + A \sin {(\vartheta - \theta )} \sin {\chi } \sin {(\omega t + \varphi )}$, where $A$ is the slow-varying amplitude, i.e. the envelope of the acoustic pressure, $\varphi$ is the slow temporal phase drift, $\vartheta$ is the position of the anti-nodal line and $\chi$ is the nature angle (Ghirardo & Bothien 2018; Magri et al. 2023). (b) State from the raw data. (c) State predicted by the proposed real-time digital twin.
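As an illustration of this representation, the quaternion ansatz in the caption of figure 12 can be evaluated directly; the slow-flow values below (amplitude, nature angle, anti-nodal position, phase drift and frequency) and the azimuthal locations are placeholders for illustration, not the identified experimental values.

```python
import numpy as np

def pressure_ansatz(theta, t, A=1000.0, chi=0.1, antinode=np.pi / 3, phase=0.0,
                    omega=2.0 * np.pi * 1100.0):
    """Quaternion (Bloch-sphere) ansatz for the azimuthal acoustic pressure.
    chi = 0 corresponds to a standing mode and chi = +/- pi/4 to pure spinning modes."""
    return (A * np.cos(antinode - theta) * np.cos(chi) * np.cos(omega * t + phase)
            + A * np.sin(antinode - theta) * np.sin(chi) * np.sin(omega * t + phase))

theta = np.deg2rad([0.0, 60.0, 120.0, 240.0])     # example azimuthal locations (placeholders)
t = np.linspace(0.0, 5e-3, 512)                   # a few acoustic periods (placeholder frequency)
p = pressure_ansatz(theta[None, :], t[:, None])   # array of shape (time, azimuthal location)
```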

6.4. Generalizability

In this section we propose a method for generalizing the digital twin to unseen experimental data, e.g. a different equivalence ratio. In §§ 6.1–6.3 we train a different ESN for each equivalence ratio. Here, we test the real-time digital twin framework using a unified ESN, i.e. we train the ESN with data for $\varPhi \in \{0.4875, 0.5125, 0.5375\}$ but not for the $\varPhi _{test}=0.5625$ case. The training data generation is identical to that detailed in Appendix C, but we combine the training data for $\varPhi \in \{0.4875, 0.5125, 0.5375\}$ to train one ESN, which has the same characteristics as those used in the previous analyses (Appendix D). We increase the ensemble size to $m=40$ to account for the larger uncertainty and variability in the dynamics. Figure 13 shows the inferred linear growth rate $\nu$ and the heat source strength $c_2\beta$ for all $\varPhi$. Notably, the digital twin for the equivalence ratio $\varPhi _{test}=0.5625$, which was unseen by the ESN, converges to similar parameters to those in figure 10. We further analyse the test case $\varPhi _{test}=0.5625$ in figure 14, which shows the histograms of the acoustic pressure after assimilation. The ESN infers the correct measurement shift to give a zero-mean prediction of the pressure, and a model bias that reduces the distance between the ensemble state estimate and the presumed truth. Overall, we conclude that the unified ESN successfully generalizes to data with unseen dynamics.
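For reference, the assembly of the unified training set can be sketched as follows; the array names and sizes are illustrative placeholders, and the per-$\varPhi$ tensors are assumed to have the shape $(2L, 2N_q, N_{tr})$ produced as in Appendix C.

```python
import numpy as np

# Placeholder training tensors, one per training equivalence ratio, each of shape
# (2L, 2*N_q, N_tr) as produced in Appendix C (the sizes below are illustrative).
rng = np.random.default_rng(0)
L, N_q, N_tr = 10, 4, 8500
U_per_phi = {phi: rng.normal(size=(2 * L, 2 * N_q, N_tr)) for phi in (0.4875, 0.5125, 0.5375)}

# The unified ESN is trained on the concatenation of the three sets (Phi = 0.5625 is held out),
# using the same ridge-regression solve (4.2) as before.
U_unified = np.concatenate(list(U_per_phi.values()), axis=0)   # shape (6L, 2*N_q, N_tr)
```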

Figure 13. Generalizability study. Estimated values of the two stability parameters $c_2\beta$ (circles) and $\nu$ (triangles) after DA with the r-EnKF using the same ESN, which is not trained with data for $\varPhi =0.5625$. Lighter colours indicate a stronger bias regularization parameter; larger marker sizes indicate more accurate solutions, i.e. smaller r.m.s. errors; and the error bars represent the ensemble standard deviation. The dashed lines correspond to the parameters identified by offline Langevin regression (Indlekofer et al. 2022).

Figure 14. Generalizability study. Histograms of the acoustic pressure at $\varPhi =0.5625$ after assimilation with the r-EnKF, at the four observed azimuthal locations. The ESN used in the digital twin is not trained with data for $\varPhi =0.5625$. Comparison between the presumed ground truth (black), the observations from raw data (red), the ensemble prediction (filled cyan) and its mean (dashed teal), and bias-corrected ensemble predictions (filled navy) and its mean (dashed light blue). The vertical lines indicate the mean of each of the distributions.

7. Conclusions

We develop a real-time digital twin of azimuthal thermoacoustic oscillations of a hydrogen-based combustor for uncovering the physical state from raw data, and predicting the nonlinear dynamics in time. We identify the four ingredients required to design a robust real-time digital twin as follows.

(i) In a real-time context, we need to work with raw data as they come from sensors; therefore, we do not pre-process the data. The data we utilize are raw (they contain both environmental and instrumental noise, and turbulent fluctuations) and sparse (they are collected by four microphones).

(ii) A low-order model, which is deterministic, qualitatively accurate, computationally cheap and physical.

(iii) An estimator of both the model bias, which originates from the modelling assumptions and approximations of the low-order model, and the measurement shift, which originates from microphones recording the total pressure (which has a non-zero mean) instead of the acoustic pressure (which has zero mean).

(iv) A statistical DA method that optimally combines the two sources of information on the system, (i)–(ii), to improve the prediction of the physical state by updating the physical parameters every time that sensors' data become available (on the fly, or in real time).

We propose a real-time DA framework to infer the acoustic pressure, physical parameters, model bias and measurement shift simultaneously, which is the r-EnKF. We find an analytical solution that minimizes the bias-regularized DA cost function (3.11). We propose a reservoir computer (an ESN) and a training strategy to infer both the model bias and measurement shift, without making assumptions on their functional forms a priori. The real-time digital twin is applied to a laboratory hydrogen-based annular combustor for a variety of equivalence ratios. The data are treated as if they came from sensors on the fly, i.e. the pressure measurements are assimilated at a time step and then disregarded until the next pressure measurement becomes available. First, after assimilation of raw data (at 1707 Hz, i.e. approximately 0.6 data points per acoustic period), the model learns the correct physical state and optimal parameters. The real-time digital twin autonomously predicts azimuthal dynamics beyond the assimilation window, i.e. without seeing more data. This is in stark contrast with state-of-the-art bias-unregularized methods, which do not perform well in the presence of model bias and/or measurement shift. Second, the digital twin uncovers the physical acoustic pressure from the raw data. Because the physical mechanisms are constrained in the low-order model, which originates from conservation laws, the digital twin acts as a physics-based filter, which removes aleatoric noise and turbulent fluctuations. Third, physically, we find that azimuthal oscillations are governed by a time-varying parameter system, which generalizes existing models that have constant parameters and capture only slow-varying variables. Fourth, we find that the key parameters that influence the dynamics are the linear growth rate and the angular frequency. Furthermore, the digital twin generalizes to equivalence ratios at which current modelling approaches are not accurate. Fifth, to generalize the digital twin framework, we train an ESN with data from three of the four available equivalence ratios and test the real-time digital twin performance in the unseen scenario. This unified ESN successfully estimates the model bias and measurement shift in unseen thermoacoustic dynamics with the regularized bias-aware DA framework. This work opens new opportunities for real-time digital twinning of multi-physics problems.

Supplementary material

The codes used for this paper are publicly available at https://github.com/MagriLab/real-time-bias-aware-DA.

Funding

A.N. is supported partly by the EPSRC-DTP, UK, the Cambridge Commonwealth, European and International Trust, UK under a Cambridge European Scholarship, and Rolls Royce, UK. A.N. and L.M. acknowledge support from the UKRI AI for Net Zero grant EP/Y005619/1. L.M. acknowledges support from the ERC Starting Grant no. PhyCo 949388 and EU-PNRR YoungResearcher TWIN ERC-PI_0000005.

Declaration of interests

The authors report no conflict of interest.

Appendix A. The EnKF

The classical (i.e. bias-unregularized) EnKF updates each ensemble member as (Evensen 2003)

(A1) \begin{align} {\boldsymbol{\mathit{\psi}}}_j^{a} &= {\boldsymbol{\mathit{\psi}}}_j^{f}+\boldsymbol{\mathsf{K}}[{\boldsymbol{\mathit{d}}}_j - \boldsymbol{\mathsf{M}}{\boldsymbol{\mathit{\psi}}}_j^{f}], \quad j=0,\dots,m-1,\nonumber\\ &\quad \mathrm{with} \ \boldsymbol{\mathsf{K}} = \boldsymbol{\mathsf{C}}_{\psi\psi}^{f}\boldsymbol{\mathsf{M}}^{\mathrm{T}} (\boldsymbol{\mathsf{C}}_{dd}+\boldsymbol{\mathsf{M}}\boldsymbol{\mathsf{C}}_{\psi\psi}^{f}\boldsymbol{\mathsf{M}}^{\mathrm{T}})^{{-}1}, \end{align}

where $\boldsymbol{\mathsf{K}}$ is the Kalman gain matrix, and the superscripts ‘f’ and ‘a’ indicate ‘forecast’ and ‘analysis’, respectively. By evaluating the measurement operator products, we can write the EnKF update for the states and parameters of each ensemble member as

(A2) \begin{equation} \begin{bmatrix} {\boldsymbol{\mathit{\phi}}}_j^{a}\\ {\boldsymbol{\mathit{\alpha}}}_j^{a} \end{bmatrix} = \begin{bmatrix} {\boldsymbol{\mathit{\phi}}}_j^{f}\\ {\boldsymbol{\mathit{\alpha}}}_j^{f} \end{bmatrix} + \underbrace{\begin{bmatrix} \boldsymbol{\mathsf{C}}_{\phi q}^{f}\ \ \boldsymbol{\mathsf{C}}_{\alpha q}^{f} \end{bmatrix} (\boldsymbol{\mathsf{C}}_{dd}+ \boldsymbol{\mathsf{C}}_{qq}^{f})^{{-}1}}_{\text{Kalman gain},~\boldsymbol{\mathsf{K}}}\overbrace{({\boldsymbol{\mathit{d}}}_j - {\boldsymbol{\mathit{q}}}_j^{f})}^{\text{Innovation}}. \end{equation}

As mentioned in § 1, the EnKF is a bias-unregularized method. The state and parameter update considers neither the model bias nor the measurement shift.
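For illustration, a minimal numpy sketch of the stochastic EnKF analysis step (A1) is given below; the variable names and sizes are illustrative, and the usage example employs synthetic placeholder data rather than the experimental observations.

```python
import numpy as np

def enkf_analysis(Psi_f, D, Cdd, M):
    """Stochastic EnKF analysis step (A1), sketched with numpy.
    Psi_f: forecast ensemble, shape (N_psi, m); D: perturbed observations, shape (N_q, m);
    Cdd: observation-error covariance (N_q, N_q); M: linear measurement operator (N_q, N_psi)."""
    m = Psi_f.shape[1]
    A = Psi_f - Psi_f.mean(axis=1, keepdims=True)               # ensemble anomalies
    Cpp = A @ A.T / (m - 1)                                     # forecast covariance C_psi_psi
    K = Cpp @ M.T @ np.linalg.inv(Cdd + M @ Cpp @ M.T)          # Kalman gain
    return Psi_f + K @ (D - M @ Psi_f)                          # analysis ensemble

# Minimal usage with illustrative sizes (values are placeholders, not the paper's):
rng = np.random.default_rng(0)
m, N_psi, N_q = 10, 6, 4
M = np.hstack([np.eye(N_q), np.zeros((N_q, N_psi - N_q))])      # observe the first N_q states
Psi_f = rng.normal(size=(N_psi, m))
d = rng.normal(size=(N_q, 1))
Cdd = 0.1 * np.eye(N_q)
D = d + rng.multivariate_normal(np.zeros(N_q), Cdd, size=m).T   # perturbed observations d_j
Psi_a = enkf_analysis(Psi_f, D, Cdd, M)
```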

Appendix B. Jacobian of the bias estimator

The Jacobian of the bias estimator is equivalent to the negative Jacobian of the ESN in the open-loop configuration, such that

(B1)\begin{equation} \boldsymbol{\mathsf{J}} =\dfrac{{\mathrm{d}}{\boldsymbol{\mathit{b}}}_{k+1}}{{\mathrm{d}}\boldsymbol{\mathsf{M}}\bar{{\boldsymbol{\mathit{\psi}}}}_k}= \dfrac{{\mathrm{d}}{\boldsymbol{\mathit{b}}}_{k+1}}{{\mathrm{d}}\bar{{\boldsymbol{\mathit{i}}}}_{k}^{a}}\dfrac{{\mathrm{d}}\bar{{\boldsymbol{\mathit{i}}}}_{k}^{a}}{{\mathrm{d}}\boldsymbol{\mathsf{M}} \bar{{\boldsymbol{\mathit{\psi}}}}_k} ={-}\dfrac{{\mathrm{d}}{\boldsymbol{\mathit{b}}}_{k+1}}{{\mathrm{d}}\bar{{\boldsymbol{\mathit{i}}}}_k}={-}\boldsymbol{\mathsf{W}}_{out}^{(1)}[\boldsymbol{\mathsf{T}}\odot(\sigma_{in}\boldsymbol{\mathsf{W}}_{in}^{(1)}\odot\boldsymbol{\mathsf{G}})], \end{equation}

where $\boldsymbol{\mathsf{W}}_{in}^{(1)}$ and $\boldsymbol{\mathsf{W}}_{out}^{(1)}$ are the first $N_q$ and $N_r$ columns of the input and output matrices, respectively, such that $\boldsymbol{\mathsf{W}} = [\boldsymbol{\mathsf{W}}^{(1)} | {\boldsymbol{w}}^{(2)}]$; $\boldsymbol{\mathsf{G}}=[{\boldsymbol {\mathit {g}}}~|\dots |~{\boldsymbol {\mathit {g}}}]^{\mathrm {T}}\in \mathbb {R}^{N_q\times N_r}$, ${\boldsymbol {\mathit {1}}}\in \mathbb {R}^{N_r}$ is a vector of ones and $\boldsymbol{\mathsf{T}}=[{\boldsymbol {\mathit {t}}}~|\dots |~{\boldsymbol {\mathit {t}}}]\in \mathbb {R}^{N_r\times N_q}$ with

(B2) \begin{equation} {\boldsymbol{\mathit{t}}}={\boldsymbol{\mathit{1}}}-\tanh^2 (\sigma_{in}\boldsymbol{\mathsf{W}}_{in}^{(1)}\left({\boldsymbol{\mathit{b}}}_k\odot{\boldsymbol{\mathit{g}}}\right)+\sigma_{in}\delta_r\boldsymbol{\mathit{{w}}}_{in}^{(2)} +\rho\boldsymbol{\mathsf{W}}{\boldsymbol{\mathit{r}}}_k). \end{equation}

For details on the derivation, the reader is referred to Nóvoa et al. (2024).
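A generic chain-rule sketch of this Jacobian for a tanh reservoir is given below; it is not the exact architecture of (B1)–(B2), because the paper's ESN additionally scales the inputs component-wise (the vector ${\boldsymbol {\mathit {g}}}$) and carries an input bias, but it shows how the $1-\tanh ^2$ factor and the input and output matrices combine.

```python
import numpy as np

def esn_output_jacobian(W_in, W, W_out, r_prev, u, sigma_in=1.0, rho=1.0):
    """Chain-rule Jacobian d(output)/d(input) for one open-loop step of a tanh reservoir,
    r_next = tanh(sigma_in * W_in @ u + rho * W @ r_prev), output = W_out @ [r_next; 1].
    Generic sketch only; see (B1)-(B2) for the architecture used in the paper."""
    pre = sigma_in * (W_in @ u) + rho * (W @ r_prev)
    tanh_prime = 1.0 - np.tanh(pre) ** 2              # derivative of tanh at the pre-activation
    dr_du = tanh_prime[:, None] * (sigma_in * W_in)   # diag(tanh') @ (sigma_in * W_in)
    return W_out[:, :-1] @ dr_du                      # last column of W_out multiplies the constant 1
```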

Appendix C. Training the network

During training, the ESN is in an open-loop configuration (figure 4a). The inputs to the reservoir are the training dataset $\boldsymbol{\mathsf{U}}=[{\boldsymbol {\mathit {u}}}_0\,|\dots |\,{\boldsymbol {\mathit {u}}}_{N_{tr}-1}]$, where each time component ${\boldsymbol {\mathit {u}}}_k = [{\boldsymbol {\mathit {b}}}_{u}(t_k); {{\boldsymbol {{\bar{i}}}}}_{u}(t_k)]$ (the subscript ‘u’ indicates training data). Although we have information from the experimental data to train the network, the optimal parameters of the thermoacoustic system are unknown. Thus, we do not know a priori the model bias and measurement shift. Selecting an appropriate training dataset is essential to obtain a robust ESN that can estimate the model bias and innovations. We create a set of $L$ guesses on the bias and innovations from a single realization of the experimental data. This means that the ESN is not trained with the ‘true’ bias.

The training data generation is summarized in algorithm 1 and the procedure is as follows. First, we take measurements for a training time window $t_{tr}$ of acoustic pressure data $\boldsymbol{\mathsf{D}}_{u}$, and we estimate $\boldsymbol{\mathsf{D}}^{{\dagger} }_{u}$ by applying a band-pass filter to the raw data $\boldsymbol{\mathsf{D}}_{u}$ (see § 2). Second, we draw $L$ sets of thermoacoustic parameters from a uniform distribution with lower and upper bounds ${\boldsymbol {\mathit {\alpha }}}^0(1-\sigma _L)$ and ${\boldsymbol {\mathit {\alpha }}}^0(1+\sigma _L)$, which are selected from an educated physical initial guess of the thermoacoustic model parameters based on previous works (Faure-Beaulieu et al. 2021b; Indlekofer et al. 2021). The ranges of the uniform distributions are reported in Appendix D. Then, we forecast the model (2.5) to statistically stationary conditions using the $L$ sets of parameters, and we take from each time series a sample of length $t_{tr}$; with this, we have the model estimates $\boldsymbol{\mathsf{Q}}_{u}\in \mathbb {R}^{L\times N_q\times N_{tr}}$. Third, we correlate each $\boldsymbol{\mathsf{Q}}_{{u},l}$ with the data by selecting the time lag $\varkappa \geq 0$ that minimizes their normalized r.m.s. error within the first 0.01 s (approximately 10 periods of the acoustic signal), i.e. for each $\boldsymbol{\mathsf{Q}}_{u,l}$, the time lag is $\varkappa _l = \arg \min _{\varkappa \geq 0} \mathrm {r.m.s.}(\boldsymbol{\mathsf{Q}}_{{u},l}(t-\varkappa ), \boldsymbol{\mathsf{D}}_{u}(t))$ with $t\in [0, 0.01]$ s. Once the $L$ model estimates are aligned with the observations such that the r.m.s. is minimized in the initial 10 periods of oscillation, we obtain the training model bias and innovations as

(C1) \begin{equation} \boldsymbol{\mathsf{U}}_l = [\boldsymbol{\mathsf{B}}_{{u},l};\boldsymbol{\mathsf{I}}_{{u},l}]= [\boldsymbol{\mathsf{D}}^{{{\dagger}}}_{u} - \boldsymbol{\mathsf{Q}}_{{u},l}; \boldsymbol{\mathsf{D}}_{u} - \boldsymbol{\mathsf{Q}}_{{u},l}] \quad \mathrm{for} \ l = 0, \dots, L-1. \end{equation}

If our initial guess of the parameters is well informed, the training bias dataset $\boldsymbol{\mathsf{B}}_{u}$ has a small norm. However, during assimilation the system can go through different states and parameter combinations, which may result in a model bias with a large norm. Because the ESN is only trained on the correlated signals, it may not be flexible enough to estimate the bias throughout the DA process (Liang, Terasaki & Miyoshi 2023). Therefore, fourth, we apply data augmentation (Goodfellow, Bengio & Courville 2016) to increase the robustness of the ESN and improve the network's adaptivity in a real-time assimilation framework. We augment the training set by adding the bias and innovations resulting from mid-correlated signals. We select the mid-correlation time lag as the average between the best time lag $\varkappa$ and the worst time lag. The total number of training time series in the proposed training method is $2L$. The training parameters used in this work are summarized in Appendix D.
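A minimal sketch of the training-dataset generation of algorithm 1 is given below; the array shapes, the maximum admissible lag and the definition of the 'worst' lag are assumptions of this sketch, and the function names are illustrative rather than those of the released code.

```python
import numpy as np

def make_training_data(D_u, D_u_dagger, Q_u, n_corr):
    """Sketch of algorithm 1: build the 2L training series of [bias; innovation] pairs.
    D_u, D_u_dagger: raw and band-pass filtered pressure, shape (N_q, N_tr);
    Q_u: L model estimates, shape (L, N_q, N_tr + max_lag); n_corr: samples in ~10 periods."""
    L, N_q, _ = Q_u.shape
    N_tr = D_u.shape[1]
    max_lag = Q_u.shape[2] - N_tr
    U = []
    for l in range(L):
        # normalized r.m.s. error (6.1) for every admissible lag, over the first ~10 periods
        errs = [np.sqrt(np.sum((Q_u[l, :, k:k + n_corr] - D_u[:, :n_corr]) ** 2)
                        / np.sum(Q_u[l, :, k:k + n_corr] ** 2)) for k in range(max_lag + 1)]
        best, worst = int(np.argmin(errs)), int(np.argmax(errs))
        mid = (best + worst) // 2                          # mid-correlated lag (data augmentation)
        for lag in (best, mid):
            Q_lag = Q_u[l, :, lag:lag + N_tr]
            U.append(np.concatenate([D_u_dagger - Q_lag,   # model bias  B_{u,l}
                                     D_u - Q_lag]))        # innovations I_{u,l}
    return np.stack(U)                                     # shape (2L, 2*N_q, N_tr)
```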

Algorithm 1. Training dataset generation

Appendix D. Simulations’ parameters

This appendix summarizes the parameters used for the DA and for training the ESN. The model parameters' ranges listed in table 2 are used to initialize the ensemble in the DA algorithm and to create the $L$ initial guesses for training the ESN (see Appendix C).

Table 2. Parameters used in the simulations.

Appendix E. Inferred thermoacoustic parameters

Table 3 details the parameters shown in figures 10 and 11. The reported parameters from the real-time digital twin (DT) are those with the minimum r.m.s. error.

Table 3. Thermoacoustic parameters inferred by the minimum-r.m.s. solution (see figure 10) of the digital twin (DT) and the bias-unregularized filter (EnKF) compared with the parameters identified by Indlekofer et al. (2022) (ref.).

Footnotes

$^{a}$Indicates that the parameters are initialized within the given range.

$^{b}$Indicates that the parameters are optimized in the given range.

References

Aggarwal, C.C. 2018 Neural Networks and Deep Learning, vol. 10. Springer.
Aguilar Pérez, J.G., Dawson, J.R., Schuller, T., Durox, D., Prieur, K. & Candel, S. 2021 Locking of azimuthal modes by breaking the symmetry in annular combustors. Combust. Flame 234, 111639.
Ahn, B., Indlekofer, T., Dawson, J.R. & Worth, N.A. 2022 Heat release rate response of azimuthal thermoacoustic instabilities in a pressurized annular combustor with methane/hydrogen flames. Combust. Flame 244, 112274.
Bauerheim, M., Nicoud, F. & Poinsot, T. 2016 Progress in analytical methods to predict and control azimuthal combustion instability modes in annular chambers. Phys. Fluids 28 (2), 021303.
Bauerheim, M., Parmentier, J.-F., Salas, P., Nicoud, F. & Poinsot, T. 2014 An analytical model for azimuthal thermoacoustic modes in an annular chamber fed by an annular plenum. Combust. Flame 161 (5), 1374–1389.
Bonavita, M. & Laloyaux, P. 2020 Machine learning for model error inference and correction. J. Adv. Model. Earth Syst. 12 (12), e2020MS002232.
Bothien, M.R., Noiray, N. & Schuermans, B. 2015 Analysis of azimuthal thermo-acoustic modes in annular gas turbine combustion chambers. Trans. ASME J. Engng Gas Turbines Power 137 (6), 061505.
Boujo, E., Bourquard, C., Xiong, Y. & Noiray, N. 2020 Processing time-series of randomly forced self-oscillators: the example of beer bottle whistling. J. Sound Vib. 464, 114981.
Bourgouin, J.-F., Durox, D., Moeck, J.P., Schuller, T. & Candel, S. 2013 Self-sustained instabilities in an annular combustor coupled by azimuthal and longitudinal acoustic modes. In Proceedings of the ASME Turbo Expo 2013: Turbine Technical Conference and Exposition. Vol. 1B: Combustion, Fuels and Emissions. San Antonio, Texas, USA, 3–7 June. V01BT04A007. ASME.
Brajard, J., Carrassi, A., Bocquet, M. & Bertino, L. 2020 Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: a case study with the Lorenz 96 model. J. Comput. Sci. 44, 101171.
Burgers, G., Jan van Leeuwen, P. & Evensen, G. 1998 Analysis scheme in the ensemble Kalman filter. Mon. Weath. Rev. 126 (6), 1719–1724.
Callaham, J.L., Loiseau, J.-C., Rigas, G. & Brunton, S.L. 2021 Nonlinear stochastic modelling with Langevin regression. Proc. R. Soc. Lond. A 477 (2250), 20210092.
Candel, S. 2002 Combustion dynamics and control: progress and challenges. Proc. Combust. Inst. 29 (1), 1–28.
Candel, S., Durox, D., Ducruix, S., Birbaud, A.-L., Noiray, N. & Schuller, T. 2009 Flame dynamics and combustion noise: progress and challenges. Intl J. Aeroacoust. 8 (1), 1–56.
Colburn, C.H., Cessna, J.B. & Bewley, T.R. 2011 State estimation in wall-bounded flow systems. Part 3. The ensemble Kalman filter. J. Fluid Mech. 682, 289–303.
Culick, F.E.C. 1988 Combustion instabilities in liquid-fuelled propulsion systems. In AGARD Conference Proceedings, vol. 450. NATO.
Culick, F.E.C. 2006 Unsteady motions in combustion chambers for propulsion systems. North Atlantic Treaty Organization, RTO AGARDograph AG-AVT-039.
Dee, D.P. 2005 Bias and data assimilation. Q. J. R. Meteorol. Soc. 131 (613), 3323–3343.
Dee, D.P. & da Silva, A.M. 1998 Data assimilation in the presence of forecast bias. Q. J. R. Meteorol. Soc. 124 (545), 269–295.
Dee, D.P. & Todling, R. 2000 Data assimilation in the presence of forecast bias: the GEOS moisture analysis. Mon. Weath. Rev. 128 (9), 3268–3282.
Donato, L., Galletti, C. & Parente, A. 2024 Self-updating digital twin of a hydrogen-powered furnace using data assimilation. Appl. Therm. Engng 236, 121431.
Dowling, A.P. & Morgans, A.S. 2005 Feedback control of combustion oscillations. Annu. Rev. Fluid Mech. 37 (1), 151–182.
Duran, I. & Morgans, A.S. 2015 On the reflection and transmission of circumferential waves through nozzles. J. Fluid Mech. 773, 137–153.
Durbin, J. & Koopman, S.J. 2012 Time Series Analysis by State Space Methods. Oxford University Press.
Evensen, G. 2003 The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn. 53 (4), 343–367.
Evensen, G. 2009 Data Assimilation: The Ensemble Kalman Filter. Springer Science & Business Media.
Faure-Beaulieu, A., Indlekofer, T., Dawson, J.R. & Noiray, N. 2021a Experiments and low-order modelling of intermittent transitions between clockwise and anticlockwise spinning thermoacoustic modes in annular combustors. Proc. Combust. Inst. 38 (4), 5943–5951.
Faure-Beaulieu, A., Indlekofer, T., Dawson, J.R. & Noiray, N. 2021b Imperfect symmetry of real annular combustors: beating thermoacoustic modes and heteroclinic orbits. J. Fluid Mech. 925, R1.
Faure-Beaulieu, A. & Noiray, N. 2020 Symmetry breaking of azimuthal waves: slow-flow dynamics on the Bloch sphere. Phys. Rev. Fluids 5 (2), 023201.
Friedland, B. 1969 Treatment of bias in recursive filtering. IEEE Trans. Autom. Control 14 (4), 359–367.
Gao, X., Wang, Y., Overton, N., Zupanski, M. & Tu, X. 2017 Data-assimilated computational fluid dynamics modeling of convection-diffusion-reaction problems. J. Comput. Sci. 21, 38–59.
Ghirardo, G. & Bothien, M.R. 2018 Quaternion structure of azimuthal instabilities. Phys. Rev. Fluids 3 (11), 113202.
Ghirardo, G. & Juniper, M.P. 2013 Azimuthal instabilities in annular combustors: standing and spinning modes. Proc. R. Soc. Lond. A 469 (2157), 20130232.
Goodfellow, I., Bengio, Y. & Courville, A. 2016 Deep Learning. MIT.
Grigoryeva, L. & Ortega, J.-P. 2018 Echo state networks are universal. Neural Netw. 108, 495–508.
Haimberger, L. 2007 Homogenization of radiosonde temperature time series using innovation statistics. J. Clim. 20 (7), 1377–1403.
Hansen, J.J., Brouzet, D. & Ihme, M. 2024 A normal-score ensemble Kalman filter for 1D shock waves. AIAA Paper 2024-2022.
Huhn, F. & Magri, L. 2020 Learning ergodic averages in chaotic systems. In Computational Science – ICCS 2020 (ed. V.V. Krzhizhanovskaya, G. Závodszky, M.H. Lees, J.J. Dongarra, P.M.A. Sloot, S. Brissos & J. Teixeira), pp. 124–132. Springer.
Humbert, S.C., Moeck, J.P., Paschereit, C.O. & Orchini, A. 2023a Symmetry-breaking in thermoacoustics: theory and experimental validation on an annular system with electroacoustic feedback. J. Sound Vib. 548, 117376.
Humbert, S.C., Orchini, A., Paschereit, C.O. & Noiray, N. 2023b Symmetry breaking in an experimental annular combustor model with deterministic electroacoustic feedback and stochastic forcing. Trans. ASME J. Engng Gas Turbines Power 145 (3), 031021.
Ignagni, M.B. 1990 Separate bias Kalman estimator with bias state noise. IEEE Trans. Autom. Control 35 (3), 338–341.
Illingworth, S.J. & Morgans, A.S. 2010 Adaptive feedback control of combustion instability in annular combustors. Combust. Sci. Technol. 182 (2), 143–164.
Indlekofer, T., Faure-Beaulieu, A., Dawson, J.R. & Noiray, N. 2022 Spontaneous and explicit symmetry breaking of thermoacoustic eigenmodes in imperfect annular geometries. J. Fluid Mech. 944, A15.
Indlekofer, T., Faure-Beaulieu, A., Noiray, N. & Dawson, J.R. 2021 The effect of dynamic operating conditions on the thermoacoustic response of hydrogen rich flames in an annular combustor. Combust. Flame 223, 284–294.
Jaeger, H. & Haas, H. 2004 Harnessing nonlinearity: predicting chaotic systems and saving energy in wireless communication. Science 304 (5667), 78–80.
Juniper, M.P. & Sujith, R.I. 2018 Sensitivity and nonlinearity of thermoacoustic oscillations. Annu. Rev. Fluid Mech. 50 (1), 661–689.
Kalman, R.E. 1960 A new approach to linear filtering and prediction problems. Trans. ASME J. Basic Engng 82 (1), 35–45.
Kolouri, S., Azimi-Sadjadi, M.R. & Ziemann, A. 2013 Acoustic tomography of the atmosphere using unscented Kalman filter. IEEE Trans. Geosci. Remote Sens. 52 (4), 2159–2171.
Krebs, W., Flohr, P., Prade, B. & Hoffmann, S. 2002 Thermoacoustic stability chart for high-intensity gas turbine combustion systems. Combust. Sci. Technol. 174 (7), 99–128.
Labahn, J.W., Wu, H., Coriton, B., Frank, J.H. & Ihme, M. 2019 Data assimilation using high-speed measurements and LES to examine local extinction events in turbulent flames. Proc. Combust. Inst. 37 (2), 2259–2266.
Laera, D., Schuller, T., Prieur, K., Durox, D., Camporeale, S.M. & Candel, S. 2017 Flame describing function analysis of spinning and standing modes in an annular combustor and comparison with experiments. Combust. Flame 184, 136–152.
Laloyaux, P., Bonavita, M., Dahoui, M., Farnan, J., Healy, S., Hólm, E. & Lang, S.T.K. 2020 Towards an unbiased stratospheric analysis. Q. J. R. Meteorol. Soc. 146 (730), 2392–2409.
Liang, J., Terasaki, K. & Miyoshi, T. 2023 A machine learning approach to the observation operator for satellite radiance data assimilation. J. Meteorol. Soc. Japan Ser. II 101 (1), 79–95.
Lieuwen, T.C. 2003 Statistical characteristics of pressure oscillations in a premixed combustor. J. Sound Vib. 260 (1), 3–17.
Lieuwen, T.C. 2012 Unsteady Combustor Physics. Cambridge University Press.
Lieuwen, T.C., Torres, H., Johnson, C. & Zinn, B.T. 2001 A mechanism of combustion instability in lean premixed gas turbine combustors. Trans. ASME J. Engng Gas Turbines Power 123 (1), 182–189.
Lukoševičius, M. 2012 A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade (ed. G. Montavon, G.B. Orr & K.-R. Müller), pp. 659–686. Springer.
Magri, L., Bauerheim, M. & Juniper, M.P. 2016a Stability analysis of thermo-acoustic nonlinear eigenproblems in annular combustors. Part I. Sensitivity. J. Comput. Phys. 325, 395–410.
Magri, L., Bauerheim, M., Nicoud, F. & Juniper, M.P. 2016b Stability analysis of thermo-acoustic nonlinear eigenproblems in annular combustors. Part II. Uncertainty quantification. J. Comput. Phys. 325, 411–421.
Magri, L. & Doan, N.A.K. 2020 Physics-informed data-driven prediction of turbulent reacting flows with Lyapunov analysis and sequential data assimilation. In Data Analysis for Direct Numerical Simulations of Turbulent Combustion (ed. H. Pitsch & A. Attili), pp. 177–196. Springer.
Magri, L., Juniper, M.P. & Moeck, J.P. 2020 Sensitivity of the Rayleigh criterion in thermoacoustics. J. Fluid Mech. 882, R1.
Magri, L., Schmid, P.J. & Moeck, J.P. 2023 Linear flow analysis inspired by mathematical methods from quantum mechanics. Annu. Rev. Fluid Mech. 55, 541–574.
Majda, A.J. & Harlim, J. 2012 Filtering Complex Turbulent Systems. Cambridge University Press.
Mazur, M., Kwah, Y.H., Indlekofer, T., Dawson, J.R. & Worth, N.A. 2021 Self-excited longitudinal and azimuthal modes in a pressurised annular combustor. Proc. Combust. Inst. 38 (4), 5997–6004.
Mazur, M., Nygård, H.T., Dawson, J.R. & Worth, N.A. 2019 Characteristics of self-excited spinning azimuthal modes in an annular combustor with turbulent premixed bluff-body flames. Proc. Combust. Inst. 37 (4), 5129–5136.
Mensah, G.A., Magri, L., Orchini, A. & Moeck, J.P. 2019 Effects of asymmetry on thermoacoustic modes in annular combustors: a higher-order perturbation study. Trans. ASME J. Engng Gas Turbines Power 141 (4), 041030.
Moeck, J.P., Paul, M. & Paschereit, C.O. 2010 Thermoacoustic instabilities in an annular Rijke tube. In Proceedings of the ASME Turbo Expo 2010: Power for Land, Sea, and Air. Vol. 2: Combustion, Fuels and Emissions, Parts A and B. Glasgow, UK, 14–18 June, pp. 1219–1232. ASME.
Morgans, A.S. & Stow, S.R. 2007 Model-based control of combustion instabilities in annular combustors. Combust. Flame 150 (4), 380–399.
Murthy, S.R., Sayadi, T., Le Chenadec, V., Schmid, P.J. & Bodony, D.J. 2019 Analysis of degenerate mechanisms triggering finite-amplitude thermo-acoustic oscillations in annular combustors. J. Fluid Mech. 881, 384–419.
Noiray, N. 2017 Linear growth rate estimation from dynamics and statistics of acoustic signal envelope in turbulent combustors. Trans. ASME J. Engng Gas Turbines Power 139 (4), 041503.
Noiray, N., Bothien, M. & Schuermans, B. 2011 Investigation of azimuthal staging concepts in annular gas turbines. Combust. Theory Model. 15 (5), 585–606.
Noiray, N. & Schuermans, B. 2013a Deterministic quantities characterizing noise driven Hopf bifurcations in gas turbine combustors. Intl J. Non-Linear Mech. 50, 152–163.
Noiray, N. & Schuermans, B. 2013b On the dynamic nature of azimuthal thermoacoustic modes in annular gas turbine combustion chambers. Proc. R. Soc. Lond. A 469 (2151), 20120535.
Nóvoa, A. & Magri, L. 2022 Real-time thermoacoustic data assimilation. J. Fluid Mech. 948, A35.
Nóvoa, A., Racca, A. & Magri, L. 2024 Inferring unknown unknowns: regularized bias-aware ensemble Kalman filter. Comput. Meth. Appl. Mech. Engng 418, 116502.
O'Connor, J., Worth, N.A. & Dawson, J.R. 2013 Flame and flow dynamics of a self-excited, standing wave circumferential instability in a model annular gas turbine combustor. In Proceedings of the ASME Turbo Expo 2013: Turbine Technical Conference and Exposition. Vol. 1B: Combustion, Fuels and Emissions. San Antonio, Texas, USA, 3–7 June. V01BT04A063. ASME.
Orchini, A., Magri, L., Silva, C.F., Mensah, G.A. & Moeck, J.P. 2020 Degenerate perturbation theory in thermoacoustics: high-order sensitivities and exceptional points. J. Fluid Mech. 903, A37.
Paschereit, C.O., Gutmark, E. & Weisenstein, W. 1998 Structure and control of thermoacoustic instabilities in a gas-turbine combustor. Combust. Sci. Technol. 138 (1–6), 213–232.
Poinsot, T. 2017 Prediction and control of combustion instabilities in real engines. Proc. Combust. Inst. 36 (1), 1–28.
Racca, A. & Magri, L. 2021 Robust optimization and validation of echo state networks for learning chaotic dynamics. Neural Netw. 142, 252–268.
Rayleigh, Lord 1878 The explanation of certain acoustical phenomena. Nature 18, 319–321.
Roy, A., Singh, S., Nair, A., Chaudhuri, S. & Sujith, R.I. 2021 Flame dynamics during intermittency and secondary bifurcation to longitudinal thermoacoustic instability in a swirl-stabilized annular combustor. Proc. Combust. Inst. 38 (4), 6221–6230.
Schuermans, B., Paschereit, C.O. & Monkewitz, P. 2006 Non-linear combustion instabilities in annular gas-turbine combustors. In 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 9–12 January. AIAA Paper 2006-549.
Siegert, S., Friedrich, R. & Peinke, J. 1998 Analysis of data sets of stochastic systems. Phys. Lett. A 243 (5), 275–280.
da Silva, A.F.C. & Colonius, T. 2020 Flow state estimation in the presence of discretization errors. J. Fluid Mech. 890, A10.
Silva, C.F. 2023 Intrinsic thermoacoustic instabilities. Prog. Energy Combust. Sci. 95, 101065.
Tarantola, A. 2005 Inverse Problem Theory and Methods for Model Parameter Estimation. Society for Industrial and Applied Mathematics.
Traverso, T. & Magri, L. 2019 Data assimilation in a nonlinear time-delayed dynamical system with Lagrangian optimization. In Computational Science – ICCS 2019 (ed. J.M.F. Rodrigues, P.J.S. Cardoso, J. Monteiro, R. Lam, V.V. Krzhizhanovskaya, M.H. Lees, J.J. Dongarra & P.M.A. Sloot), pp. 156–168. Springer.
Wang, P., He, C., Deng, Z. & Liu, Y. 2021 Data assimilation of flow-acoustic resonance. J. Acoust. Soc. Am. 149 (6), 4134–4148.
Wolf, P., Staffelbach, G., Gicquel, L.Y.M., Müller, J.-D. & Poinsot, T. 2012 Acoustic and large eddy simulation studies of azimuthal modes in annular combustion chambers. Combust. Flame 159 (11), 3398–3413.
Worth, N.A. & Dawson, J.R. 2013a Modal dynamics of self-excited azimuthal instabilities in an annular combustion chamber. Combust. Flame 160 (11), 2476–2489.
Worth, N.A. & Dawson, J.R. 2013b Self-excited circumferential instabilities in a model annular gas turbine combustor: global flame dynamics. Proc. Combust. Inst. 34 (2), 3127–3134.
Yang, D., Laera, D. & Morgans, A.S. 2019 A systematic study of nonlinear coupling of thermoacoustic modes in annular combustors. J. Sound Vib. 456, 137–161.
Yu, H., Jaravel, T., Ihme, M., Juniper, M.P. & Magri, L. 2019a Data assimilation and optimal calibration in nonlinear models of flame dynamics. J. Engng Gas Turbines Power 141 (12), 121010.
Yu, H., Juniper, M.P. & Magri, L. 2019b Combined state and parameter estimation in level-set methods. J. Comput. Phys. 399, 108950.