This chapter is devoted to the study of finite element methods in one and two dimensions. We begin by presenting the general theory of Galerkin methods and their analysis; in particular, Galerkin orthogonality and Céa’s lemma are introduced in an abstract setting. Then the construction of finite element spaces, and their bases, in one dimension is detailed. The notions of mesh and hat basis functions are introduced here. The general theory of Galerkin approximations is then used to reduce the error analysis of finite element schemes to a question in approximation theory. The properties of the Lagrange interpolant in Sobolev spaces (in one dimension) then close the argument. Duality techniques, i.e., Nitsche’s trick, are then used to obtain optimal error estimates in L2. The same ideas are presented, mostly without proof, for the finite element scheme in two dimensions.
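To make the one-dimensional construction concrete, here is a minimal sketch (illustrative, not from the book) of a piecewise linear finite element solver for −u″ = f on (0, 1) with homogeneous Dirichlet conditions: hat basis functions on a uniform mesh yield a tridiagonal stiffness matrix, which is solved with the Thomas algorithm; the load vector uses the trapezoidal rule, which is exact for constant f.

```python
# Linear FEM for -u'' = f on (0,1), u(0) = u(1) = 0, on a uniform mesh.
# Hat basis functions give the tridiagonal stiffness matrix
# (1/h) * tridiag(-1, 2, -1); the load uses the trapezoidal rule.
def fem_poisson_1d(n, f):
    h = 1.0 / n
    m = n - 1                      # number of interior nodes
    diag = [2.0 / h] * m
    off = -1.0 / h                 # constant sub/super diagonal
    load = [h * f((i + 1) * h) for i in range(m)]
    # Thomas algorithm for the tridiagonal system
    cp, dp = [0.0] * m, [0.0] * m
    cp[0], dp[0] = off / diag[0], load[0] / diag[0]
    for i in range(1, m):
        den = diag[i] - off * cp[i - 1]
        cp[i] = off / den
        dp[i] = (load[i] - off * dp[i - 1]) / den
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u                       # nodal values at x = h, 2h, ..., (n-1)h

# For f = 1 the exact solution is u(x) = x(1-x)/2; in 1D the Galerkin
# solution with an exactly computed load is exact at the nodes.
u = fem_poisson_1d(10, lambda x: 1.0)
```

The nodal exactness seen here is a special one-dimensional phenomenon; in two dimensions only the interpolation-based error estimates of the chapter apply.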
Periodic differential equations and their approximation are the topic of this chapter. We discuss the application of classical finite difference schemes, and their analysis, in this setting. The spectral Galerkin method, that is, the use of trigonometric polynomials as a basis, is then discussed, and we show its spectral accuracy. Finally, the pseudo-spectral method is presented and its implementation via the DFT is discussed.
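A minimal sketch of pseudo-spectral differentiation of a periodic grid function: transform to Fourier coefficients, multiply mode k by ik, and transform back. For self-containedness a naive O(N²) DFT is used here; a real implementation would use an FFT.

```python
import cmath
import math

N = 16
xs = [2 * math.pi * j / N for j in range(N)]
u = [math.sin(x) for x in xs]          # sample a periodic function

# Naive DFT coefficient of mode k (an FFT would do all of them in O(N log N)).
def coef(k):
    return sum(u[j] * cmath.exp(-1j * k * xs[j]) for j in range(N)) / N

# Differentiate in Fourier space: multiply mode k by ik.
# The Nyquist mode is dropped, i.e., its contribution is set to zero.
modes = [(k, coef(k)) for k in range(-N // 2 + 1, N // 2)]
du = [sum(1j * k * ck * cmath.exp(1j * k * x) for k, ck in modes).real
      for x in xs]
# For u = sin, du recovers cos to near machine precision: spectral accuracy.
```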
In this chapter we begin the study of best approximations. In this case we study the best (min) polynomial approximation in the uniform (max) norm. The existence of a best approximating polynomial is first presented. The more subtle issue of uniqueness is then discussed. To show uniqueness, the celebrated de la Vallée Poussin and Chebyshev equioscillation theorems are presented. A first error estimate is then given. The problems of interpolation, discussed in the previous chapter, and best approximation are then related via the Lebesgue constant. Chebyshev polynomials are then introduced, and their most relevant properties presented. Interpolation at Chebyshev nodes, and the mitigation of the Runge phenomenon, are then discussed. Finally, Bernstein polynomials and the moduli of continuity and smoothness are detailed in order to prove the Weierstrass approximation theorem.
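To illustrate interpolation at Chebyshev nodes, here is a small sketch (illustrative, not from the text) using the barycentric formula at the Chebyshev points of the second kind; applied to the Runge function 1/(1 + 25x²), it avoids the wild oscillations that equispaced nodes produce.

```python
import math

def cheb_interp(f, n):
    # Chebyshev points of the second kind and their barycentric weights
    xs = [math.cos(math.pi * j / n) for j in range(n + 1)]
    ws = [(-1) ** j * (0.5 if j in (0, n) else 1.0) for j in range(n + 1)]
    fs = [f(x) for x in xs]
    def p(x):
        num = den = 0.0
        for xj, wj, fj in zip(xs, ws, fs):
            if x == xj:            # evaluation exactly at a node
                return fj
            t = wj / (x - xj)
            num += t * fj
            den += t
        return num / den
    return p

runge = lambda x: 1.0 / (1.0 + 25.0 * x * x)
p = cheb_interp(runge, 40)
err = abs(p(0.3) - runge(0.3))   # small, unlike equispaced interpolation
```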
We prove the equivalence of solving a linear system of equations with an HPD matrix to a quadratic minimization problem. With the help of this equivalence, we study the minimization of a quadratic energy: we introduce gradient descent methods with exact and approximate line searches, and we study the preconditioned steepest descent method. We introduce the conjugate gradient method, with preconditioning, as a Galerkin approximation over Krylov subspaces and show its convergence. For systems with non-HPD matrices we discuss the GMRES method.
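A minimal sketch of the (unpreconditioned) conjugate gradient method; as a Galerkin approximation over Krylov subspaces, it terminates in at most n steps in exact arithmetic. The 2×2 system below is illustrative.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-12):
    n = len(b)
    x = [0.0] * n
    r = b[:]               # residual b - A x for the initial guess x = 0
    p = r[:]               # first search direction
    rs = dot(r, r)
    for _ in range(n):     # at most n steps in exact arithmetic
        Ap = [dot(row, p) for row in A]
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # symmetric positive definite
b = [1.0, 2.0]
x = cg(A, b)                   # exact solution is (1/11, 7/11)
```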
We present the classical theory of linear iterative schemes for linear systems of equations: the Richardson, Jacobi, Gauss–Seidel, and relaxation methods are presented and analyzed. We introduce the Householder–John criterion for the convergence of iterative schemes. Symmetrization and symmetric iterations are presented for the relaxation scheme. Some nonstationary methods, like minimal residuals, Chebyshev iterations, and minimal corrections, are presented.
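A minimal sketch of the Jacobi and Gauss–Seidel iterations on a small, strictly diagonally dominant system (illustrative); the only difference is whether updated components are used immediately.

```python
# Jacobi and Gauss-Seidel for a 2x2 diagonally dominant system;
# the exact solution of the system below is (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]

# Jacobi: every component is updated from the *previous* iterate.
x = [0.0, 0.0]
for _ in range(60):
    x = [(b[0] - A[0][1] * x[1]) / A[0][0],
         (b[1] - A[1][0] * x[0]) / A[1][1]]

# Gauss-Seidel: updated components are used immediately,
# which here roughly doubles the convergence rate.
y = [0.0, 0.0]
for _ in range(30):
    y[0] = (b[0] - A[0][1] * y[1]) / A[0][0]
    y[1] = (b[1] - A[1][0] * y[0]) / A[1][1]
```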
This chapter studies finite difference methods for elliptic problems. It begins with a rather lengthy and general discussion of grid domains, grid functions, finite difference operators, and their consistency. We then introduce the notion of stability of a finite difference scheme and Lax’s principle: a consistent and stable scheme is convergent. Then we apply all these notions to elliptic operators in one and two dimensions, with the main focus being the Laplacian. We prove the discrete maximum principle, develop energy arguments, and show how these can be used to attain stability and convergence in various norms. For more general operators we introduce the notions of homogeneous schemes and upwinding. For operators in divergence form we provide an analysis via energy arguments. For operators not in divergence form we analyze the monotonicity and comparison principles of the arising schemes.
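As a small illustration of consistency, the five-point discrete Laplacian is exact on quadratics and has O(h²) truncation error in general; the sketch below (illustrative, not from the text) checks both facts, the second by halving h on a quartic and observing the error ratio of 4.

```python
# Five-point discrete Laplacian on a uniform grid of size h.
def lap_h(u, x, y, h):
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4.0 * u(x, y)) / h ** 2

quad = lambda x, y: x * x + y * y        # Laplacian is exactly 4
err_quad = abs(lap_h(quad, 0.3, 0.7, 0.1) - 4.0)   # zero up to roundoff

quart = lambda x, y: x ** 4              # Laplacian is 12 x^2
e1 = abs(lap_h(quart, 0.5, 0.0, 0.10) - 12 * 0.5 ** 2)
e2 = abs(lap_h(quart, 0.5, 0.0, 0.05) - 12 * 0.5 ** 2)
ratio = e1 / e2                          # ~4: second-order consistency
```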
We introduce Runge–Kutta methods and their Butcher tableaux. We discuss necessary order conditions, and thoroughly analyze some two- and three-stage schemes. We then discuss the class of Runge–Kutta collocation methods and their consistency. In particular, we present the class of Gauss–Legendre–Runge–Kutta methods and their order. Finally, we study how to approximate dissipative equations via so-called dissipative schemes: the M-matrix of a scheme and schemes of B(q) and C(q) types.
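A minimal sketch of an explicit Runge–Kutta step driven directly by a Butcher tableau (A, b, c); the two-stage Heun tableau instantiated below is one of the second-order schemes of the kind analyzed here (illustrative, scalar case only).

```python
import math

# One explicit Runge-Kutta step for y' = f(t, y), scalar y, given a
# Butcher tableau: A strictly lower triangular, weights b, nodes c.
def rk_step(f, t, y, h, A, b, c):
    k = []
    for i in range(len(b)):
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Heun's method: a two-stage, second-order explicit scheme.
A = [[0.0, 0.0], [1.0, 0.0]]
b = [0.5, 0.5]
c = [0.0, 1.0]

# Integrate y' = y, y(0) = 1 up to t = 1; the exact value is e.
y, t, h = 1.0, 0.0, 0.01
for _ in range(100):
    y = rk_step(lambda t, y: y, t, y, h, A, b, c)
    t += h
err = abs(y - math.e)          # O(h^2) global error
```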
In this chapter we study finite difference schemes for parabolic partial differential equations. The notions of conditional and unconditional stability, and the CFL condition, are introduced to analyze the classical schemes for the heat equation. Different techniques, like maximum principles and energy arguments, are presented to obtain stability in different norms. Then, we turn to the study of the pure initial value problem, the grand goal being to discuss the von Neumann stability analysis. To accomplish this we introduce the notions of the Fourier-Z transform of grid functions and the symbol of a finite difference scheme. This allows us to state the von Neumann stability condition and prove that it is necessary and sufficient for stability. These notions are also used to present a convergence analysis that is somewhat different from the one presented in previous sections.
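A minimal sketch of the explicit (forward Euler) scheme for the heat equation u_t = u_xx with homogeneous Dirichlet data, run at λ = k/h² = 0.4, inside the CFL stability range λ ≤ 1/2 (illustrative parameters):

```python
import math

h = 0.05                 # mesh size on (0, 1)
lam = 0.4                # lambda = k / h^2 <= 1/2: CFL stability condition
k = lam * h * h          # time step
n = round(1.0 / h)       # number of space intervals

# Initial datum sin(pi x); the exact solution is exp(-pi^2 t) sin(pi x).
u = [math.sin(math.pi * i * h) for i in range(n + 1)]
for _ in range(round(0.1 / k)):      # advance to t = 0.1
    u = ([0.0]
         + [u[i] + lam * (u[i + 1] - 2 * u[i] + u[i - 1])
            for i in range(1, n)]
         + [0.0])

exact = math.exp(-math.pi ** 2 * 0.1)    # exact value at x = 1/2, t = 0.1
err = abs(u[n // 2] - exact)
```

Taking λ > 1/2 in the same code produces oscillations that grow without bound, which is the conditional stability the chapter analyzes.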
We study the solution of overdetermined systems of equations. We introduce weak and, in particular, least squares solutions. For full-rank systems, we show existence and uniqueness via the normal equations. We introduce projection matrices and the QR factorization. We discuss the computation of the QR factorization with the help of Householder reflectors. For rank-deficient systems we prove the existence and uniqueness of a minimal norm least squares solution. We introduce the Moore–Penrose pseudoinverse, show how it relates to the SVD, and how it can be used to solve rank-deficient systems.
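For a full-rank line fit, the normal equations AᵀAc = Aᵀb reduce to a 2×2 system; a minimal sketch with illustrative data (in practice one would prefer the QR route for conditioning):

```python
# Least squares fit of y ~ c0 + c1 * x via the normal equations.
# For full-rank A, the solution of A^T A c = A^T b is the unique
# minimizer of ||A c - b||_2.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]        # data lying exactly on y = 1 + 2x

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Solve the 2x2 normal equations by Cramer's rule.
det = n * sxx - sx * sx
c0 = (sy * sxx - sx * sxy) / det
c1 = (n * sxy - sx * sy) / det
```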
This appendix collects a review of the calculus and analysis in one and several variables that the reader should be familiar with. Notions of convergence, continuity, differentiability and integrability are recalled here.
We present several facts about the natural transformations between vector spaces, and their representations via matrices. We introduce induced matrix norms, and the spectral decomposition of nondefective matrices.
This chapter presents all the needed theoretical background regarding the initial value problem for a first order ordinary differential equation in finite dimensions. Local and global existence, uniqueness, and continuous dependence on data are presented. The discussion then turns to the stability of solutions. We discuss the flow map and the Alekseev–Grobner lemma. Dissipative equations and a discussion of Lyapunov stability of fixed points conclude the chapter.
We study the problem of polynomial interpolation. Its solution via the Vandermonde matrix, and via a Lagrange nodal basis, is then presented, and error estimates are provided. The Runge phenomenon is then illustrated. Hermite interpolation is then studied, its solution is given, and error estimates are provided. The problem of Lagrange interpolation is then generalized to the case of holomorphic functions on the complex plane, and error estimates are provided. A more efficient construction of the interpolating polynomial, via divided differences, is then given. We extend the notion of divided differences in order to use them to provide error estimates for polynomial interpolation.
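A minimal sketch of the divided difference construction (illustrative): the table is built in place, and the Newton form is evaluated by nested multiplication.

```python
def divided_differences(xs, ys):
    # In-place divided difference table; on return,
    # coef = [f[x0], f[x0,x1], f[x0,x1,x2], ...].
    n = len(xs)
    coef = ys[:]
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    # Nested (Horner-like) evaluation of the Newton form.
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[i]) + coef[i]
    return p

xs = [0.0, 1.0, 3.0]
ys = [0.0, 1.0, 9.0]             # samples of f(x) = x^2
coef = divided_differences(xs, ys)
val = newton_eval(xs, coef, 2.5)  # the interpolant reproduces x^2: 6.25
```

Adding a new node only appends one column to the table, which is the efficiency gain over re-solving a Vandermonde system.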
We study the problem of approximating the value of a (weighted) integral of a function. We introduce the concepts of a quadrature rule and its consistency. Interpolatory quadrature rules and their construction are then discussed. Their error analysis, based on notions of previous chapters, is then presented. Then an error analysis based on the Peano Kernel Theorem and scaling arguments is developed. Newton–Cotes formulas are then developed and analyzed. Then, composite quadrature rules are presented and their use is illustrated. Their analysis, on the basis of Euler–Maclaurin formulas, is then presented. The chapter concludes with a discussion of Gaussian quadrature formulas, their properties, and optimality.
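As an example of a composite rule, here is composite Simpson (a Newton–Cotes rule applied on pairs of subintervals); it is exact on cubics and fourth-order accurate in general (illustrative sketch):

```python
import math

def composite_simpson(f, a, b, n):
    # n must be even: Simpson's rule on n/2 pairs of subintervals.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3.0

cubic = composite_simpson(lambda x: x ** 3, 0.0, 1.0, 2)  # exact: 1/4
sine = composite_simpson(math.sin, 0.0, math.pi, 100)     # exact: 2
```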
This chapter serves two purposes: it introduces several essential concepts of linear and nonlinear functional analysis that will be used in subsequent chapters and, as an illustration of them, studies the problem of unconstrained minimization of a convex functional. All the necessary notions of existence, uniqueness, and optimality conditions are presented and analyzed. Preconditioned gradient descent methods for strongly convex, locally Lipschitz smooth objectives in infinite dimensions are then presented and analyzed. A general framework to show linear convergence in this setting is then presented. The preconditioned steepest descent with exact and approximate line searches are then analyzed using the same framework. Finally, the application of Newton’s method to the Euler equations is discussed. The local convergence is shown, and how to achieve global convergence is briefly discussed.
This chapter is dedicated to the solution of nonlinear systems of equations, that is, finding roots of functions. We begin with a classification of roots into simple and non-simple, and a few words about their stability. Then we present some of the simplest methods for root finding: bisection, false position, and fixed point iterations and their variants. For all these schemes, we provide sufficient conditions for them to be well defined and convergent. A detailed analysis of Newton’s method and its variants (collectively known as quasi-Newton methods) in one dimension is then presented. We show sufficient conditions for its local and global quadratic convergence, as well as how to proceed in the case of non-simple roots. Then we present Newton’s method in several dimensions and show its local quadratic convergence, including the celebrated Kantorovich theorem.
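A minimal sketch of Newton's method in one dimension (illustrative); near a simple root, where f′(x*) ≠ 0, the iteration converges quadratically.

```python
import math

def newton(f, df, x0, tol=1e-14, maxit=50):
    # Newton iteration x <- x - f(x)/f'(x); quadratic convergence
    # near a simple root.
    x = x0
    for _ in range(maxit):
        dx = f(x) / df(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
err = abs(root - math.sqrt(2.0))   # machine precision after ~6 iterations
```

For a non-simple root (e.g. f(x) = x²) the same iteration converges only linearly, which is why the chapter treats that case separately.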
Collocation methods for elliptic problems are discussed here. We begin by providing their definition. For their analysis we first introduce a weighted weak formulation of the problem, and show that it is well posed. Then, we introduce and analyze a Galerkin approximation for this problem, where the subspace consists of polynomials that vanish sufficiently fast at the boundary. Next, a scheme with quadrature is proposed, and its analysis is provided using the theory of variational crimes and Strang lemmas. For its implementation and analysis the discrete cosine and Chebyshev transforms are introduced and analyzed. The phenomenon of aliasing is briefly discussed. Finally, we connect the weighted Galerkin approximation with quadrature to collocation methods, thus providing an analysis of collocation schemes.