We consider the problem of numerical integration when the sampling nodes form a stationary point process on the real line. In previous papers it was argued that a naïve Riemann sum approach can cause a severe variance inflation when the sampling points are not equidistant. We show that this inflation can be avoided using a higher-order Newton–Cotes quadrature rule which exploits smoothness properties of the integrand. Under mild assumptions, the resulting estimator is unbiased and its variance asymptotically obeys a power law as a function of the mean point distance. If the Newton–Cotes rule is of sufficiently high order, the exponent of this law turns out to only depend on the point process through its mean point distance. We illustrate our findings with the stereological estimation of the volume of a compact object, suggesting alternatives to the well-established Cavalieri estimator.
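The contrast between a naïve Riemann sum and a higher-order rule on randomly placed nodes can be illustrated with a small simulation. The sketch below uses a homogeneous Poisson process and the trapezoidal rule as the simplest Newton–Cotes rule of order two; the integrand, interval and intensity are illustrative assumptions, and this is not the stereological estimator studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.cos                          # illustrative smooth integrand (assumption)
a, b, intensity = 0.0, 10.0, 50.0   # interval and mean number of points per unit length
exact = np.sin(b) - np.sin(a)

riemann, trapez = [], []
for _ in range(2000):
    # Nodes of a homogeneous Poisson process on [a, b], plus the endpoints.
    n = rng.poisson(intensity * (b - a))
    x = np.concatenate(([a], np.sort(rng.uniform(a, b, n)), [b]))
    h = np.diff(x)                                           # random spacings
    riemann.append(np.sum(f(x[:-1]) * h))                    # naive left Riemann sum
    trapez.append(np.sum(0.5 * (f(x[:-1]) + f(x[1:])) * h))  # trapezoidal rule

print("Riemann  : bias %+.1e  variance %.1e" % (np.mean(riemann) - exact, np.var(riemann)))
print("Trapezoid: bias %+.1e  variance %.1e" % (np.mean(trapez) - exact, np.var(trapez)))
```

In this toy setting the trapezoidal rule already removes most of the variance inflation caused by the random spacings.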
The discontinuous Galerkin (DG) method provides a robust and flexible technique for the time integration of fractional diffusion problems. However, a practical implementation uses coefficients defined by integrals that are not easily evaluated. We describe specialized quadrature techniques that efficiently maintain the overall accuracy of the DG method. In addition, we observe in numerical experiments that known superconvergence properties of DG time stepping for classical diffusion problems carry over in a modified form to the fractional-order setting.
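As a hedged illustration of the kind of quadrature issue involved (not the specific scheme of the paper), coefficients arising from fractional-order kernels typically involve weakly singular integrals of the form $\int_0^1 t^{\alpha-1}g(t)\,dt$, which Gauss-Jacobi rules handle by absorbing the singular weight; the fractional order and integrand below are illustrative assumptions.

```python
import numpy as np
from math import factorial
from scipy.special import roots_jacobi

# Weakly singular integral  I = \int_0^1 t^(alpha-1) g(t) dt  with 0 < alpha < 1.
alpha = 0.5                                   # illustrative fractional order (assumption)
g = np.cos                                    # smooth part of the integrand (assumption)

n = 8
x, w = roots_jacobi(n, 0.0, alpha - 1.0)      # Gauss-Jacobi rule, weight (1+x)^(alpha-1) on [-1, 1]
t = (x + 1.0) / 2.0                           # map nodes from [-1, 1] to [0, 1]
approx = 2.0 ** (-alpha) * np.dot(w, g(t))    # Jacobian factor 2^(-(alpha-1)-1) = 2^(-alpha)

# Reference value from the cosine series: I = sum_k (-1)^k / ((2k)! (2k + alpha)).
ref = sum((-1) ** k / (factorial(2 * k) * (2 * k + alpha)) for k in range(12))
print("Gauss-Jacobi (n=8):", approx, "  series reference:", ref)
```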
Effective and accurate high-degree spline interpolation is still a challenging task in today's applications. Higher-degree spline interpolation is not commonly used because it requires knowledge of higher-order derivatives of the function at the nodes of a given mesh.
In this article, our goal is to demonstrate the continuity of the piecewise polynomials and their derivatives at the connecting points, obtained with a method initially developed by Beaudoin (1998, 2003) and Beauchemin (2003). This new method, involving the discrete Fourier transform (DFT/FFT), leads to higher-degree spline interpolation for equally spaced data on an interval $[0,T]$. To do this, we analyze the singularities that may occur when solving the system of equations that enables the construction of splines of any degree. We also note an important difference between odd-degree and even-degree splines. These results prove that Beaudoin and Beauchemin's method leads to spline interpolation of any degree and that this new method could eventually be used to improve the accuracy of spline interpolation in traditional problems.
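For comparison, a minimal sketch of conventional higher-degree spline interpolation of equally spaced data using SciPy's B-spline machinery is given below; it is a standard construction, not the DFT/FFT-based method of Beaudoin and Beauchemin, and the test function and degree are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Equally spaced samples of a smooth test function on [0, T] (illustrative choice).
T, n = 2 * np.pi, 33
x = np.linspace(0.0, T, n)
y = np.sin(x)

# Degree-5 (quintic) spline interpolant via the conventional B-spline construction;
# no derivative values are supplied, SciPy chooses suitable end conditions itself.
spline = make_interp_spline(x, y, k=5)

xx = np.linspace(0.0, T, 1001)
print("max interpolation error:", np.max(np.abs(spline(xx) - np.sin(xx))))
```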
The Orthogonal Least Squares (OLS) algorithm is an efficient sparse recovery algorithm that has received much attention in recent years. On the one hand, this paper considers support recovery of sparse signals by the OLS algorithm in the noisy case. We show that the OLS algorithm exactly recovers the support of a $K$-sparse signal $\boldsymbol{x}$ from $\boldsymbol{y}=\boldsymbol{\Phi}\boldsymbol{x}+\boldsymbol{e}$ in $K$ iterations, provided that the sensing matrix $\boldsymbol{\Phi}$ satisfies the restricted isometry property (RIP) with restricted isometry constant (RIC) $\delta_{K+1}<1/\sqrt{K+1}$ and the minimum magnitude of the nonzero elements of $\boldsymbol{x}$ satisfies some constraint. On the other hand, this paper demonstrates that the OLS algorithm exactly recovers the support of the best $K$-term approximation of an almost sparse signal $\boldsymbol{x}$ in the general perturbation case, in which both $\boldsymbol{y}$ and $\boldsymbol{\Phi}$ are perturbed. We show that the support of the best $K$-term approximation of $\boldsymbol{x}$ can be recovered under reasonable conditions based on the RIP.
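For reference, a brute-force NumPy sketch of the standard OLS greedy iteration is given below (the paper's contribution is the RIP-based analysis, not the algorithm itself); the matrix size, sparsity level and noise level are illustrative assumptions.

```python
import numpy as np

def ols(Phi, y, K):
    """Orthogonal Least Squares: greedily select K columns of Phi to explain y.
    At each step, the column giving the smallest least-squares residual is added."""
    support = []
    for _ in range(K):
        best_j, best_norm = None, np.inf
        for j in range(Phi.shape[1]):
            if j in support:
                continue
            A = Phi[:, support + [j]]
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            r_norm = np.linalg.norm(y - A @ coef)
            if r_norm < best_norm:
                best_j, best_norm = j, r_norm
        support.append(best_j)
    coef = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    return sorted(support), coef

rng = np.random.default_rng(1)
m, n, K = 64, 256, 5
Phi = rng.standard_normal((m, n)) / np.sqrt(m)        # random sensing matrix (assumption)
x = np.zeros(n)
idx = rng.choice(n, K, replace=False)
x[idx] = rng.uniform(1.0, 2.0, K)                     # K-sparse signal, nonzeros bounded away from zero
y = Phi @ x + 1e-3 * rng.standard_normal(m)           # noisy measurements y = Phi x + e
print("true support:", sorted(idx.tolist()), "  recovered:", ols(Phi, y, K)[0])
```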
In this paper, the correspondence between a formal multiple power series and a special type of branched continued fraction, the so-called ‘multidimensional regular C-fraction with independent variables’, is analysed. We provide an algorithm, based on the classical algorithm, that computes the coefficients of the corresponding multidimensional regular C-fraction with independent variables from the coefficients of the given formal multiple power series. A few numerical experiments show, on the one hand, the efficiency of the proposed algorithm and, on the other, the power and feasibility of the method for numerically approximating certain multivariable functions from their formal multiple power series.
We give bounds on the error in the asymptotic approximation of the log-Gamma function $\ln\Gamma(z)$ for complex $z$ in the right half-plane. These improve on earlier bounds by Behnke and Sommer [Theorie der analytischen Funktionen einer komplexen Veränderlichen, 2nd edn (Springer, Berlin, 1962)], Spira [‘Calculation of the Gamma function by Stirling’s formula’, Math. Comp. 25 (1971), 317–322], and Hare [‘Computing the principal branch of log-Gamma’, J. Algorithms 25 (1997), 221–236]. We show that $|R_{k+1}(z)/T_{k}(z)|<\sqrt{\pi k}$ for nonzero $z$ in the right half-plane, where $T_{k}(z)$ is the $k$th term in the asymptotic series and $R_{k+1}(z)$ is the error incurred in truncating the series after $k$ terms. We deduce similar bounds for the asymptotic approximation of the Riemann–Siegel theta function $\vartheta(t)$. We show that the accuracy of a well-known approximation to $\vartheta(t)$ can be improved by including an exponentially small term in the approximation. This improves the attainable accuracy for real $t>0$ from $O(\exp(-\pi t))$ to $O(\exp(-2\pi t))$. We discuss a similar example due to Olver [‘Error bounds for asymptotic expansions, with an application to cylinder functions of large argument’, in: Asymptotic Solutions of Differential Equations and Their Applications (ed. C. H. Wilcox) (Wiley, New York, 1964), 16–18], and a connection with the Stokes phenomenon.
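The flavour of such a bound can be checked numerically with the standard Stirling series for $\ln\Gamma(z)$, whose $j$th term is $T_j(z)=B_{2j}/(2j(2j-1)z^{2j-1})$; the sample points and truncation order below are illustrative assumptions, and SciPy's loggamma serves as the reference value.

```python
import numpy as np
from scipy.special import bernoulli, loggamma

def stirling_terms(z, k):
    """Terms T_j(z) = B_{2j} / (2j (2j-1) z^{2j-1}) of the Stirling series, j = 1..k."""
    B = bernoulli(2 * k)                              # Bernoulli numbers B_0, ..., B_{2k}
    return np.array([B[2 * j] / (2 * j * (2 * j - 1) * z ** (2 * j - 1))
                     for j in range(1, k + 1)])

def ratio_and_bound(z, k):
    T = stirling_terms(z, k)
    main = (z - 0.5) * np.log(z) - z + 0.5 * np.log(2 * np.pi)
    R = loggamma(z) - (main + T.sum())                # error after truncating after k terms
    return abs(R / T[-1]), np.sqrt(np.pi * k)

for z in (5.0, 2.0 + 3.0j, 0.5 + 10.0j):              # points in the right half-plane
    ratio, bound = ratio_and_bound(z, k=4)
    print(f"z = {z}:  |R_(k+1)/T_k| = {ratio:.4f}  <  sqrt(pi k) = {bound:.4f}")
```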
A new minimization principle for the Poisson equation using two variables – the solution and the gradient of the solution – is introduced. This principle allows us to use any conforming finite element spaces for both variables, where the finite element spaces do not need to satisfy the so-called inf–sup condition. A numerical example demonstrates the superiority of this approach.
In this paper we consider the recovery of sparse orthogonal polynomial expansions using stochastic collocation via ℓq minimization. The main results include: 1) using the norm inequality between ℓq and ℓ2 and the square root lifting inequality, we present several theoretical estimates regarding the recoverability of both sparse and non-sparse signals via ℓq minimization; 2) we then combine this method with stochastic collocation to identify the coefficients of sparse orthogonal polynomial expansions, a problem stemming from the field of uncertainty quantification, and obtain recoverability results for both sparse polynomial functions and general non-sparse functions. We also present various numerical experiments to show the performance of the ℓq algorithm. We first present some benchmark tests to demonstrate the ability of ℓq minimization to exactly recover sparse signals, and then consider three classical analytical functions to show the advantage of this method over standard ℓ1 and reweighted ℓ1 minimization. All the numerical results indicate that the ℓq method performs better than standard ℓ1 and reweighted ℓ1 minimization.
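One common way to attack the non-convex ℓq problem is iteratively reweighted least squares (IRLS); the sketch below is a generic stand-in for that idea rather than the solver used in the paper, and q, the problem sizes and the smoothing schedule are illustrative assumptions.

```python
import numpy as np

def irls_lq(A, y, q=0.5, iters=50, eps=1.0):
    """IRLS surrogate for  min ||x||_q^q  subject to  A x = y  (0 < q <= 1)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]               # least-squares initialisation
    for _ in range(iters):
        d = (x ** 2 + eps) ** (1.0 - q / 2.0)              # diagonal of D, smoothed weights
        x = d * (A.T @ np.linalg.solve((A * d) @ A.T, y))  # x = D A^T (A D A^T)^{-1} y
        eps = max(eps * 0.1, 1e-12)                        # gradually sharpen the weights
    return x

rng = np.random.default_rng(2)
m, n, K = 40, 120, 6
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
x_rec = irls_lq(A, A @ x_true, q=0.5)
print("relative recovery error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```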
The computational efficiency of high-dimensional (3D and 4D) B-spline interpolation, constructed by the classical tensor product method, is improved greatly by precomputing the B-spline function. This is possible because of the character of the NLT code: only the linearised characteristics are needed, so the unperturbed orbit, as well as the values of the B-spline function at the interpolation points, can be precomputed at the beginning of the simulation. By integrating this fixed-point interpolation algorithm into the NLT code, the high-dimensional gyro-kinetic Vlasov equation can be solved directly, without the operator splitting method applied in conventional semi-Lagrangian codes. In the Rosenbluth-Hinton test, NLT runs a few times faster for the Vlasov solver part and converges at a time step about one order of magnitude larger than a conventional splitting code.
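A one-dimensional toy sketch of the precomputation idea is shown below (the actual code works with 3D/4D tensor-product B-splines inside NLT, which is not reproduced here): because the interpolation points are fixed over the whole simulation, the B-spline basis values can be evaluated once, and every later interpolation reduces to a matrix-vector product; the grid sizes and test data are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 1.0, 64)                    # data grid (illustrative)
query = np.sort(rng.uniform(0.0, 1.0, 200))         # fixed interpolation points, known up front

# Precompute: cubic-spline cardinal basis evaluated at the fixed query points.
# Interpolating the identity matrix gives one cardinal basis function per grid node.
B = make_interp_spline(grid, np.eye(grid.size), k=3)(query)   # shape (200, 64)

# Per time step: interpolation of new data is just a matrix-vector product.
for step in range(3):
    data = np.sin(2.0 * np.pi * (step + 1) * grid)
    values = B @ data                                # no spline construction needed here
    exact = np.sin(2.0 * np.pi * (step + 1) * query)
    print(f"step {step}: max interpolation error {np.max(np.abs(values - exact)):.2e}")
```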
In this paper we introduce high-dimensional tensor product interpolation for the combination technique. In order to compute the sparse grid solution, the discrete numerical subsolutions have to be extended by interpolation. If unsuitable interpolation techniques are used, the rate of convergence deteriorates. We derive the necessary framework to preserve the error structure of high-order finite difference solutions of elliptic partial differential equations within the combination technique framework. This strategy enables us to obtain high-order sparse grid solutions on the full grid. For the case of order four, we illustrate our theoretical results with two test examples in up to four dimensions.
Iterative Filtering (IF) is an alternative technique to the Empirical Mode Decomposition (EMD) algorithm for the decomposition of non-stationary and non-linear signals. Recently, in [3], IF was proved to be convergent for any L2 signal, and its stability was also demonstrated through examples. Furthermore, [3] introduced the so-called Fokker–Planck (FP) filters, which are smooth at every point and have compact support. Based on those results, in this paper we introduce the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals. We present the extension of FP filters to higher dimensions. We prove convergence results under general sufficient conditions on the filter shape. Finally, we illustrate the promising performance of the MIF algorithm, equipped with high-dimensional FP filters, when applied to the decomposition of two-dimensional signals.
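A bare-bones one-dimensional sketch of the IF iteration is given below for orientation: one intrinsic mode function is extracted by repeatedly subtracting a local moving average from the signal. A simple uniform filter stands in for the Fokker–Planck filters of [3], and the window length is chosen by hand to match the fast component, whereas IF derives it from the signal; the two-component test signal is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def extract_imf(signal, window, inner_iters=10):
    """Extract one IMF: repeatedly subtract a moving-average trend (the IF iteration)."""
    imf = signal.copy()
    for _ in range(inner_iters):
        imf = imf - uniform_filter1d(imf, size=window, mode="reflect")
    return imf

t = np.arange(2000) / 2000.0
fast = np.sin(2 * np.pi * 50 * t)                 # fast oscillation
slow = np.sin(2 * np.pi * 3 * t)                  # slow trend
signal = fast + slow

imf1 = extract_imf(signal, window=40)             # window ~ one period of the fast component
inner = slice(250, -250)                          # ignore boundary effects in the check
print("IMF vs fast component   :", f"{np.max(np.abs(imf1 - fast)[inner]):.3f}")
print("remainder vs slow trend :", f"{np.max(np.abs(signal - imf1 - slow)[inner]):.3f}")
```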
This paper is concerned with the construction of high-order mass-lumping finite elements on simplexes and a program for computing mass-lumping finite elements on triangles and tetrahedra. The polynomial spaces for mass-lumping finite elements, as proposed in the literature, are presented and discussed. In particular, the unisolvence problem of symmetric point-sets for the polynomial spaces used in mass-lumping elements is addressed, and an interesting property of unisolvent symmetric point-sets is observed and discussed. Though a theoretical proof is still lacking, this property seems to hold in general, and it can greatly reduce the number of cases to consider in the computation of mass-lumping elements. A program for computing mass-lumping finite elements on triangles and tetrahedra, derived from the code for computing numerical quadrature rules presented in [7], is introduced. New mass-lumping finite elements on triangles of higher orders than those available in the literature, namely 7, 8 and 9, found using this program, are reported.
A novel mesh deformation technique is developed based on the Delaunay graph mapping method and inverse distance weighting (IDW) interpolation. The algorithm retains the efficiency of Delaunay graph mapping mesh deformation while offering better control of the near-surface mesh quality. The Delaunay graph is used to divide the mesh domain into a number of sub-domains. On each sub-domain, inverse distance weighting interpolation is applied, resulting in an efficiency similar to that of the fast Delaunay graph mapping method. The paper shows how the near-wall mesh quality is controlled and improved by the new method.
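A small sketch of the inverse distance weighting step on its own is given below (without the Delaunay graph partition, which is the other ingredient of the method): displacements prescribed on boundary nodes are propagated to interior nodes as normalised inverse-distance-weighted averages; the geometry, exponent and displacement field are illustrative assumptions.

```python
import numpy as np

def idw_displace(interior, boundary, boundary_disp, power=2.0):
    """Inverse distance weighting: interpolate boundary displacements to interior nodes."""
    d = np.linalg.norm(interior[:, None, :] - boundary[None, :, :], axis=2)  # pairwise distances
    w = 1.0 / np.maximum(d, 1e-12) ** power          # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)                # normalise per interior node
    return w @ boundary_disp                         # weighted average of boundary motion

rng = np.random.default_rng(4)
angles = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
boundary = np.column_stack([np.cos(angles), np.sin(angles)])   # nodes on a unit circle
interior = 0.7 * rng.uniform(-1.0, 1.0, size=(200, 2))         # interior mesh nodes
boundary_disp = 0.1 * boundary                                  # e.g. uniform inflation of the surface
interior_disp = idw_displace(interior, boundary, boundary_disp)
print("sample interior displacements:\n", interior_disp[:3])
```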
Motivated by magnetohydrodynamic (MHD) simulation for tokamaks with isogeometric analysis, we present splines defined over a rectangular mesh with a complex topological structure, i.e., with extraordinary vertices. These splines are piecewise polynomial functions of bi-degree (d,d) with prescribed parameter continuity. We compute their dimension and exhibit basis functions, called Hermite bases, for bicubic spline spaces. We investigate their potential applications for solving partial differential equations (PDEs) over a physical domain in the framework of isogeometric analysis. For instance, we analyze the approximation properties of these spline spaces in the L2-norm, and we show that the optimal approximation order and numerical convergence rates are reached with a proper parameterization, despite the fact that the basis functions are singular at extraordinary vertices.
The nonlinear Dirac equation is an important model in quantum physics with a set of conservation laws and a multi-symplectic formulation. In this paper, we propose energy-preserving and multi-symplectic wavelet algorithms for this model. We further improve the computational efficiency of these algorithms via a splitting technique and an explicit strategy. Numerical experiments on long-term simulations are conducted to show the excellent performance of the proposed algorithms and to verify our theoretical analysis.
Through appropriate choices of elements in the underlying iterated function system, the methodology of fractal interpolation enables us to associate a family of continuous self-referential functions with a prescribed real-valued continuous function on a real compact interval. This procedure elicits what is referred to as an α-fractal operator on the space of all real-valued continuous functions defined on a compact interval I. With an eye towards connecting fractal functions with other branches of mathematics, in this paper we continue to investigate the fractal operator in more general spaces, such as the space of all bounded functions and the Lebesgue spaces, and in some standard spaces of smooth functions, such as the space of k-times continuously differentiable functions, Hölder spaces and Sobolev spaces. Using properties of the α-fractal operator, the existence of Schauder bases consisting of self-referential functions for these function spaces is established.
This paper is concerned with numerical approximations of a nonlocal heat equation defined on an infinite domain. Two classes of artificial boundary conditions (ABCs) are designed, namely, nonlocal analogues of Dirichlet-to-Neumann-type ABCs (global in time) and high-order Padé approximate ABCs (local in time). These ABCs reformulate the original problem as an initial-boundary-value (IBV) problem on a bounded domain. For the global ABCs, we adopt a fast evolution scheme to enhance computational efficiency and reduce memory storage. Fully discrete schemes, second-order in both time and space, are given to discretize the two reduced problems. Extensive numerical experiments are carried out to show the accuracy and efficiency of the proposed methods.
Even though various fast methods and preconditioning techniques are available for the simulation of Poisson problems, little work has been done on solving Poisson's equation by using the Helmholtz decomposition scheme. To address this gap, we propose a novel efficient algorithm to solve Poisson's equation in irregular two-dimensional domains for electrostatics through a quasi-Helmholtz decomposition technique, the loop-tree basis decomposition. It can handle Dirichlet, Neumann or mixed boundary problems in which the filling media can be homogeneous or inhomogeneous. A novel point of this method is to first find the electric flux efficiently by applying the loop-tree basis functions. Subsequently, the potential is obtained by inverting the gradient operator. Furthermore, treatments for both Dirichlet and Neumann boundary conditions are addressed. Finally, the validation and efficiency are illustrated by several numerical examples. Through these simulations, it is observed that the computational complexity of our proposed method almost scales as , where N is the number of triangle patches in the mesh. Consequently, this new algorithm is a feasible fast Poisson solver.
The computation of integrals in higher dimensions and on general domains, when no explicit cubature rules are known, can be ‘easily’ addressed by means of the quasi-Monte Carlo method. The method, simple in its formulation, becomes computationally inefficient when the space dimension grows and the integration domain is particularly complex. In this paper we present two new approaches to quasi-Monte Carlo cubature based on nonnegative least squares and approximate Fekete points. The main idea is to use fewer points, and especially good points, for solving the moment system. Good points are here intended as points with good interpolation properties, owing to the strict connection between interpolation and cubature. Numerical experiments show that, on average, only about a tenth of the points is needed to maintain the approximation order of the quasi-Monte Carlo method. The method has been satisfactorily applied to two- and three-dimensional problems on quite complex domains.
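A hedged two-dimensional sketch of the nonnegative-least-squares compression idea is given below: starting from a Halton quasi-Monte Carlo point set, NNLS picks a small subset with nonnegative weights that reproduces the monomial moments, and the compressed rule is then used in place of the full point set. The basis, total degree, domain and integrand are illustrative assumptions, and the approximate Fekete variant is not shown.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import qmc

deg, n_qmc = 8, 4000
pts = qmc.Halton(d=2, scramble=False).random(n_qmc)          # QMC points in [0, 1]^2

# Moment matrix of the monomials x^i y^j with i + j <= deg at the QMC points,
# and the exact moments over [0, 1]^2:  integral of x^i y^j = 1 / ((i+1)(j+1)).
exps = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
V = np.array([pts[:, 0] ** i * pts[:, 1] ** j for i, j in exps])
moments = np.array([1.0 / ((i + 1) * (j + 1)) for i, j in exps])

w, _ = nnls(V, moments)                  # nonnegative weights; most of them come out zero
keep = w > 0
print("points kept:", int(keep.sum()), "of", n_qmc)

f = lambda x, y: np.exp(x * y)           # test integrand (illustrative)
full_qmc = f(pts[:, 0], pts[:, 1]).mean()
compressed = w[keep] @ f(pts[keep, 0], pts[keep, 1])
print("full QMC estimate:", full_qmc, "  compressed-rule estimate:", compressed)
```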