There have been great efforts in the development of higher-order numerical schemes for the compressible Euler equations in recent decades. The traditional test cases proposed thirty years ago mostly target strong shock interactions, which may no longer be adequate for evaluating the performance of current higher-order schemes. In order to set a higher standard for the development of new algorithms, in this paper we present a few benchmark cases with severe and complicated wave structures and interactions, which can be used to clearly distinguish different kinds of higher-order schemes. All tests are selected so that the numerical settings are very simple and any higher-order scheme can be applied to them straightforwardly. The examples include highly oscillatory solutions and a large density ratio problem in the one-dimensional case. In two dimensions, the cases include hurricane-like solutions; interactions of planar contact discontinuities with asymptotically large Mach number (the composite of an entropy wave and vortex sheets); interactions of planar rarefaction waves with transition from continuous flows to the presence of shocks; and other types of interactions of two-dimensional planar waves. Achieving good performance on all these cases may push algorithm developers to seek new methodologies in the design of higher-order schemes, and to improve the robustness and accuracy of higher-order schemes to a new standard. To provide reference solutions, the fourth-order gas-kinetic scheme (GKS) is applied to all these benchmark cases, even though the GKS solutions may not be very accurate in some of them. The main purpose of this paper is to encourage other CFD researchers to try these cases as well, and to promote further development of higher-order schemes.
In this paper, we discuss the blowup of solutions to Volterra integro-differential equations (VIDEs) with a dissipative linear term. To overcome the fluctuation of solutions, we establish a Razumikhin-type theorem to verify the unboundedness of solutions. We also introduce leaving-times and arriving-times to estimate the time it takes for solutions to reach ∞. Based on these two techniques, the blowup and global existence of solutions to VIDEs with locally and globally integrable kernels are presented. As applications, the critical exponents of semi-linear Volterra diffusion equations (SLVDEs) on bounded domains with constant kernel are generalized to SLVDEs on bounded domains and on ℝN with some locally integrable kernels. Moreover, the critical exponents of SLVDEs on both bounded domains and the unbounded domain ℝN are investigated for globally integrable kernels.
PHT-splines are a type of polynomial spline over hierarchical T-meshes which possess a perfect local refinement property. This property makes PHT-splines useful in geometric modeling and isogeometric analysis. The current implementation of PHT-splines stores the basis functions in Bézier form, which saves some computational cost but consumes a large amount of memory. In this paper, we propose a de Boor-like algorithm to evaluate PHT-splines given only the control coefficients and the hierarchical mesh structure. The basic idea is to represent a PHT-spline locally as a tensor-product B-spline, and then apply the de Boor algorithm to evaluate the PHT-spline at a given parameter pair. We analyze the computational complexity and memory costs. The results show that our algorithm incurs computational costs of the same order while requiring much less memory than the Bézier representation. We give an example to illustrate the effectiveness of our algorithm.
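The local tensor-product evaluation the abstract describes ultimately reduces to the classical univariate de Boor recursion. A minimal sketch of that recursion (standard textbook form, not the paper's PHT-specific implementation; the knot vector and coefficients below are illustrative) is:

```python
def de_boor(k, x, t, c, p):
    """Evaluate a B-spline of degree p at parameter x via de Boor's algorithm.

    k: knot span index such that t[k] <= x < t[k+1]
    t: knot vector, c: control coefficients (de Boor points).
    """
    # Copy the p+1 coefficients that influence the span [t[k], t[k+1]).
    d = [c[j + k - p] for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            # Convex-combination weight for the knot insertion step.
            alpha = (x - t[j + k - p]) / (t[j + 1 + k - r] - t[j + k - p])
            d[j] = (1.0 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]

# Example: a degree-1 spline through (0,0), (1,2), (2,4), i.e. s(x) = 2x.
knots = [0.0, 0.0, 1.0, 2.0, 2.0]
coeffs = [0.0, 2.0, 4.0]
value = de_boor(1, 0.5, knots, coeffs, 1)   # s(0.5) = 1.0
```

A bivariate PHT-spline evaluation would apply this recursion along each parametric direction of the local tensor-product patch.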
In this paper we consider an algorithm for recovering sparse orthogonal polynomial expansions using stochastic collocation via ℓq minimization. The main results include: 1) Using the norm inequality between ℓq and ℓ2 and the square-root lifting inequality, we present several theoretical estimates of the recoverability of both sparse and non-sparse signals via ℓq minimization; 2) We then combine this method with stochastic collocation to identify the coefficients of sparse orthogonal polynomial expansions, stemming from the field of uncertainty quantification. We obtain recoverability results for both sparse polynomial functions and general non-sparse functions. We also present various numerical experiments to show the performance of the ℓq algorithm. We first present some benchmark tests to demonstrate the ability of ℓq minimization to recover exactly sparse signals, and then consider three classical analytical functions to show the advantage of this method over standard ℓ1 and reweighted ℓ1 minimization. All the numerical results indicate that the ℓq method performs better than standard ℓ1 and reweighted ℓ1 minimization.
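The abstract does not state which solver is used for the nonconvex ℓq problem; a common approach, shown here purely as an illustrative sketch (the function name and the tiny 2×3 example are my own, not from the paper), is iteratively reweighted least squares with a decreasing smoothing parameter:

```python
import numpy as np

def irls_lq(A, b, q=0.5, eps0=1.0, iters=50):
    """Approximately solve min ||x||_q^q subject to A x = b via IRLS.

    Each iteration solves a weighted least-norm problem in closed form:
    x = W^{-1} A^T (A W^{-1} A^T)^{-1} b with W = diag((x_i^2 + eps)^(q/2 - 1)).
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm starting point
    eps = eps0
    for _ in range(iters):
        w = (x**2 + eps) ** (1.0 - q / 2.0)    # diagonal of W^{-1}
        Aw = A * w                             # A W^{-1} (scales columns)
        x = w * (A.T @ np.linalg.solve(Aw @ A.T, b))
        eps = max(eps / 10.0, 1e-12)           # anneal the smoothing
    return x

# Toy example: the sparsest solution of A x = b is [1, 0, 0].
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 0.0])
x = irls_lq(A, b, q=0.5)
```

The annealed `eps` keeps the reweighting well defined near zero entries while steering the iterates toward a sparse solution of the underdetermined system.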
This paper is devoted to numerical methods for mean-field stochastic differential equations (MSDEs). We first develop the mean-field Itô formula and the mean-field Itô-Taylor expansion. Then, based on the new formula and expansion, we propose Itô-Taylor schemes of strong order γ and weak order η for MSDEs, and theoretically obtain the convergence rate γ of the strong Itô-Taylor scheme, which can be seen as an extension of the well-known fundamental strong convergence theorem to the mean-field SDE setting. Finally, some numerical examples are given to verify our theoretical results.
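The simplest member of the strong Itô-Taylor family is the Euler-Maruyama scheme (strong order 1/2). A minimal sketch for a mean-field SDE, with the expectation E[X_t] replaced by a particle ensemble average (the drift dX = (E[X] - X)dt + σ dW and all parameter values are illustrative, not the paper's examples), is:

```python
import numpy as np

def euler_maruyama_mean_field(x0, T, n_steps, n_particles, sigma, rng):
    """Euler-Maruyama (strong order 1/2 Itô-Taylor) scheme for the
    illustrative mean-field SDE dX_t = (E[X_t] - X_t) dt + sigma dW_t,
    with E[X_t] approximated by the ensemble average over particles."""
    dt = T / n_steps
    x = np.full(n_particles, x0, dtype=float)
    for _ in range(n_steps):
        m = x.mean()                                   # particle estimate of E[X_t]
        dw = rng.normal(0.0, np.sqrt(dt), n_particles) # Brownian increments
        x = x + (m - x) * dt + sigma * dw
    return x

rng = np.random.default_rng(0)
x_T = euler_maruyama_mean_field(1.0, 1.0, 200, 2000, 0.5, rng)
```

For this particular drift, E[X_t] is conserved, so the ensemble mean at the final time should stay close to the initial value 1.0, which gives a quick sanity check of the scheme.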
This paper is devoted to the American option pricing problem governed by the Black-Scholes equation. The existence of an optimal exercise policy makes the problem a free boundary value problem of a parabolic equation on an unbounded domain. The optimal exercise boundary satisfies a nonlinear Volterra integral equation and is solved by a high-order collocation method based on graded meshes. This free boundary is then deformed to a fixed boundary by the front-fixing transformation. The boundary condition at infinity (due to the fact that the underlying asset's price could be arbitrarily large in theory), is treated by the perfectly matched layer technique. Finally, the resulting initial-boundary value problems for the option price and some of the Greeks on a bounded rectangular space-time domain are solved by a finite element method. In particular, for Delta, one of the Greeks, we propose a discontinuous Galerkin method to treat the discontinuity in its initial condition. Convergence results for these two methods are analyzed and several numerical simulations are provided to verify these theoretical results.
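For context on the discontinuity mentioned for Delta: in the European counterpart (which, unlike the American problem above, has a closed form), Delta of a call tends to the step function 1_{S>K} as time-to-maturity shrinks. A small sketch (standard Black-Scholes formula; the numerical values are illustrative):

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_delta(S, K, r, sigma, tau):
    """Black-Scholes Delta of a European call with time-to-maturity tau.

    As tau -> 0 this tends to the indicator 1_{S > K}: the discontinuous
    terminal condition that motivates the discontinuous Galerkin treatment.
    """
    if tau <= 0.0:
        return 1.0 if S > K else 0.0
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)
```

Near maturity the Delta surface develops an ever-steeper gradient around S = K, which is why a method robust to discontinuous data is needed there.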
This paper presents a heuristic Learning-based Non-Negativity Constrained Variation (L-NNCV) model that searches the coefficients of the variational model automatically and adapts the variation to different images and problems through a supervised-learning strategy. The model includes two terms: a problem-based term derived from prior knowledge, and an image-driven regularization learned from training samples. The model can be solved by the classical ε-constraint method. Experimental results show that the experimental effectiveness of each term in the regularization accords with the corresponding theoretical proof, and that the proposed method outperforms other PDE-based methods on image denoising and deblurring.
A numerical time-stepping algorithm for ordinary or partial differential equations is proposed that adaptively modifies the dimensionality of the underlying modal basis expansion. Specifically, the method takes advantage of any underlying low-dimensional manifolds or subspaces in the system by using dimensionality-reduction techniques, such as the proper orthogonal decomposition, in order to adaptively represent the solution in the optimal basis modes. The method can provide significant computational savings for systems where low-dimensional manifolds are present, since the reduction can lower the dimensionality of the underlying high-dimensional system by orders of magnitude. A comparison of the computational efficiency and error of this method is given, showing the algorithm to be potentially of great value for simulations of high-dimensional dynamical systems, especially where slow-manifold dynamics are known to arise. The method is envisioned to automatically take advantage of any potential computational savings associated with dimensionality reduction, much as adaptive time-steppers automatically take advantage of large step sizes whenever possible.
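The core ingredient, extracting an optimal low-dimensional basis from solution snapshots via the proper orthogonal decomposition, can be sketched as follows (the synthetic snapshot data and the 0.9999 energy threshold are illustrative assumptions, not the paper's setup):

```python
import numpy as np

# Synthetic snapshot matrix: columns are solution states at successive
# times, built to lie near a hidden 2-dimensional subspace.
rng = np.random.default_rng(1)
n, m = 500, 100                                     # state dim, snapshot count
basis = np.linalg.qr(rng.normal(size=(n, 2)))[0]    # hidden 2D subspace
coeffs = rng.normal(size=(2, m))
snapshots = basis @ coeffs + 1e-8 * rng.normal(size=(n, m))

# POD: left singular vectors ordered by captured energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1        # modes needed for 99.99%
Ur = U[:, :r]

# Project onto the r POD modes and reconstruct; an adaptive time-stepper
# would evolve the r modal coefficients instead of the full state.
reconstructed = Ur @ (Ur.T @ snapshots)
err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
```

Here `r` recovers the true dimensionality (2), so the time-stepper would advance 2 coefficients rather than a 500-dimensional state, which is the source of the computational savings described above.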
The high-order total variation (TV2) and ℓ1 based (TV2L1) model has an advantage over the TVL1 model in its ability to avoid the staircase effect, and a constrained model has an advantage over its unconstrained counterpart in the simplicity of estimating its parameters. In this paper, we consider solving the TV2L1-based magnetic resonance imaging (MRI) signal reconstruction problem by an efficient alternating direction method of multipliers. By fully exploiting the problem's special structure, we ensure that every subproblem either possesses a closed-form solution or can be solved via fast Fourier transforms, which makes the cost per iteration very low. Experimental results for MRI reconstruction are presented to illustrate the effectiveness of the new model and algorithm. Comparisons with its recent unconstrained counterpart are also reported.
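The prototypical closed-form subproblem in such ADMM splittings is the ℓ1 proximal step, solved by componentwise soft-thresholding. A minimal sketch (a generic building block, not the paper's full algorithm):

```python
import numpy as np

def soft_threshold(v, tau):
    """Closed-form solution of min_x tau*||x||_1 + 0.5*||x - v||_2^2,
    the kind of l1 subproblem that arises in the ADMM splitting."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

v = np.array([3.0, -0.5, 1.0])
x = soft_threshold(v, 1.0)    # -> [2.0, 0.0, 0.0]
```

Entries smaller than the threshold are set exactly to zero, which is what makes each ADMM iteration cheap and promotes sparsity in the reconstruction.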
Anisotropic mesh adaptation is studied for the linear finite element solution of 3D anisotropic diffusion problems. The 𝕄-uniform mesh approach is used, where an anisotropic adaptive mesh is generated as a uniform one in the metric specified by a tensor. In addition to mesh adaptation, preservation of the maximum principle is also studied. Some new sufficient conditions for maximum principle preservation are developed, and a mesh quality measure is defined to serve as a good indicator. Four different metric tensors are investigated: one is the identity matrix, one focuses on minimizing an error bound, another on preservation of the maximum principle, while the fourth combines both. Numerical examples show that these metric tensors serve their purposes. In particular, the fourth leads to meshes that improve the satisfaction of the maximum principle by the finite element solution while concentrating elements in regions where the error is large. Application of the anisotropic mesh adaptation to fractured reservoir simulation in petroleum engineering is also investigated, where unphysical solutions can occur and mesh adaptation can help improve the satisfaction of the maximum principle.