Chapter 2: Linearly independent lists of vectors that span a vector space are of special importance. They provide a bridge between the abstract world of vector spaces and the concrete world of matrices. They permit us to define the dimension of a vector space and motivate the concept of matrix similarity.
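The bridge to matrices can be sketched numerically. In this illustrative check (the example vectors are my own, not from the text), a list of column vectors is linearly independent exactly when the rank of the matrix they form equals the number of vectors, and that rank is the dimension of their span.

```python
import numpy as np

# Three vectors in R^3; the third column is the sum of the first two,
# so the list is linearly dependent (hypothetical example).
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

rank = np.linalg.matrix_rank(V)      # dimension of the span of the columns
independent = (rank == V.shape[1])   # independent iff rank = number of vectors
```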
Chapter 1: In this chapter, we provide formal definitions of real and complex vector spaces, and many examples. Among the important concepts introduced are linear combinations, span, linear independence, and linear dependence.
Chapter 6: In this chapter, we explore the role of orthonormal (orthogonal and normalized) vectors in an inner-product space. Matrix representations of linear transformations with respect to orthonormal bases are of particular importance. They are associated with the notion of an adjoint transformation. We give a brief introduction to Fourier series that highlights the orthogonality properties of sine and cosine functions. In the final section of the chapter, we discuss orthogonal polynomials and the remarkable numerical integration rules associated with them.
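Two of the chapter's themes can be illustrated numerically (the example functions and node counts below are my own choices). First, sines and cosines of different frequencies are orthogonal over a full period; second, an n-point Gauss-Legendre rule, built from orthogonal polynomials, integrates polynomials of degree up to 2n - 1 exactly.

```python
import numpy as np

# Orthogonality: the inner product of sin(2x) and cos(3x) over [0, 2*pi]
# is zero; a uniform rectangle rule recovers this essentially exactly
# for trigonometric polynomials.
x = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
dx = x[1] - x[0]
inner = np.sum(np.sin(2 * x) * np.cos(3 * x)) * dx

# Gaussian quadrature: a 3-point Gauss-Legendre rule integrates x^4
# over [-1, 1] exactly (true value 2/5).
nodes, weights = np.polynomial.legendre.leggauss(3)
gauss = np.sum(weights * nodes**4)
exact = 2.0 / 5.0
```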
Chapter 13: In this chapter, we discuss several problems in which a similarity transformation to Jordan canonical form facilitates a solution. For example, we find that A is similar to A^T; that lim_{p→∞} A^p = 0 if and only if every eigenvalue of A has modulus less than 1; and that the invertible Jordan blocks of AB and BA are the same. We begin by considering coupled systems of ordinary differential equations.
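The criterion for the powers of A to converge to zero is easy to observe numerically; here is a sketch with a made-up 2 x 2 matrix whose eigenvalues both have modulus less than 1.

```python
import numpy as np

# Hypothetical example: both eigenvalues of A have modulus below 1,
# so A^p tends to the zero matrix as p grows.
A = np.array([[0.5, 0.3],
              [0.1, 0.4]])

moduli = np.abs(np.linalg.eigvals(A))   # eigenvalue moduli
A_p = np.linalg.matrix_power(A, 50)     # a high power of A
largest_entry = np.max(np.abs(A_p))     # close to 0
```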
Chapter 3: A matrix is not just an array of scalars. It can be thought of as an array of submatrices in many different ways. We begin by regarding a matrix as an array of columns and we explore some implications of this viewpoint for matrix products and Cramer's rule. We turn to arrays of rows, which lead to additional insights for matrix products. We discuss determinants of block matrices, block versions of elementary matrices, and Cauchy's formula for the determinant of a bordered matrix. Finally, we introduce the Kronecker product, which provides a way to construct block matrices with a special structure.
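The special block structure produced by the Kronecker product can be seen in a small numeric sketch (the matrices below are my own illustrative choices): the (i, j) block of the product is the (i, j) entry of the first factor times the second factor.

```python
import numpy as np

# Kronecker product of a 2 x 2 matrix with the 2 x 2 identity:
# the result is a 4 x 4 block matrix whose (i, j) block is a_ij * B.
A = np.array([[1, 2],
              [3, 4]])
B = np.eye(2, dtype=int)

K = np.kron(A, B)
```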
Chapter 12: In the preceding chapter, we found that each square complex matrix A is similar to a direct sum of upper triangular unispectral matrices. We now show that A is similar to a direct sum of Jordan blocks (unispectral upper bidiagonal matrices with 1s on the superdiagonal) that is unique up to permutation of its direct summands.
Chapter 19: In this chapter, we introduce new examples of norms, with special attention to submultiplicative norms on matrices. These norms are well-adapted to applications involving power series of matrices and iterative numerical algorithms. We use them to prove a formula for the spectral radius that is the key to a fundamental theorem on positive matrices in the next chapter.
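The spectral radius formula referred to here states that ρ(A) = lim_{k→∞} ||A^k||^{1/k} for any submultiplicative matrix norm. A numeric sketch, using the Frobenius norm and a made-up matrix:

```python
import numpy as np

# Hypothetical example: compare the spectral radius of A with the
# k-th root of the norm of A^k for a moderately large k.
A = np.array([[0.0, 2.0],
              [0.5, 0.0]])

rho = np.max(np.abs(np.linalg.eigvals(A)))   # spectral radius
k = 60
approx = np.linalg.norm(np.linalg.matrix_power(A, k), 'fro') ** (1.0 / k)
```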
Chapter 17: In this chapter, we investigate applications and consequences of the singular value decomposition. For example, it provides a systematic way to approximate a matrix by a matrix of lower rank. It also permits us to define a generalized inverse for matrices that are not invertible (and need not even be square). The singular value decomposition has a pleasant special form for complex symmetric matrices. The largest singular value is especially important; it turns out to be a norm (the spectral norm) on matrices. We use the spectral norm to study how the solution of a linear system changes if the system is perturbed, and how the eigenvalues of a matrix can change if it is perturbed.
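Three of these applications are easy to sketch numerically (the example matrix is mine): truncating the SVD gives a best lower-rank approximation, the spectral norm is the largest singular value, and the SVD underlies the generalized (pseudo)inverse.

```python
import numpy as np

# A 3 x 2 matrix with singular values 3 and 1 (illustrative example).
A = np.array([[3.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])   # best rank-1 approximation
err = np.linalg.norm(A - A1, 2)           # approximation error = next singular value

spectral_norm = np.linalg.norm(A, 2)      # equals the largest singular value
P = np.linalg.pinv(A)                     # generalized inverse: A P A = A
```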
Chapter 5: Many abstract concepts that make linear algebra a powerful mathematical tool have their roots in plane geometry, so we begin the study of inner product spaces with a review of basic properties of lengths and angles in the real two-dimensional plane. Guided by these geometrical properties, we formulate axioms for inner products and norms, which provide generalized notions of length (norm) and perpendicularity (orthogonality) in abstract vector spaces.
Chapter 8: Many problems in applied mathematics involve finding a minimum-norm solution or a best approximation, subject to certain constraints. Orthogonal subspaces arise frequently in solving such problems. Among the topics we discuss in this chapter are the minimum-norm solution to a consistent linear system, a least-squares solution to an inconsistent linear system, and orthogonal projections.
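Both kinds of solution can be computed with standard tools; here is a hedged sketch with made-up data, using np.linalg.lstsq, which returns a least-squares solution for an inconsistent system and the minimum-norm solution for an underdetermined one.

```python
import numpy as np

# Inconsistent overdetermined system: no x satisfies Ax = b exactly,
# so lstsq minimizes ||Ax - b|| (hypothetical data).
A_over = np.array([[1.0, 0.0],
                   [1.0, 0.0],
                   [0.0, 1.0]])
b_over = np.array([1.0, 3.0, 2.0])
x_ls, *_ = np.linalg.lstsq(A_over, b_over, rcond=None)

# Consistent underdetermined system: infinitely many solutions;
# lstsq returns the one of minimum norm.
A_under = np.array([[1.0, 1.0]])
b_under = np.array([2.0])
x_min, *_ = np.linalg.lstsq(A_under, b_under, rcond=None)
```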
Chapter 4: In this chapter, we collect some important facts about matrices: the rank-nullity theorem; the intersection and sum of column spaces; rank inequalities for sums and products of matrices; the LU factorization and solutions of linear systems; row equivalence, the pivot column decomposition, and the reduced row echelon form. In a final capstone section, we use linear dependence, the trace, block matrices, induction, and similarity to characterize matrices that are commutators. Throughout the chapter, we emphasize block-matrix methods.
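The rank-nullity theorem (rank + nullity = number of columns) can be verified numerically; in this sketch (example matrix mine) the nullity is counted from an explicit null-space basis rather than inferred from the rank.

```python
import numpy as np

# A 2 x 3 matrix of rank 1: the second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

n = A.shape[1]
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))

# Rows of Vt beyond the rank span the null space of A.
null_basis = Vt[rank:].T
nullity = null_basis.shape[1]
```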
Chapter 7: Unitary matrices play important roles in theory and computation. The adjoint of a unitary matrix is its inverse, so unitary matrices are easy to invert. They preserve lengths and angles, and have remarkable stability properties in many numerical algorithms. In this chapter, we explore the properties of unitary matrices and present several special cases. We derive an explicit formula for a unitary matrix whose first column is given. We give a constructive proof of the QR factorization and show that every square complex matrix is unitarily similar to an upper Hessenberg matrix.
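A QR factorization can be computed numerically as a quick illustration (the example matrix is mine; the text's proof is constructive rather than a library call): Q has orthonormal columns and R is upper triangular.

```python
import numpy as np

# QR factorization of a real 3 x 2 matrix (illustrative example).
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])

Q, R = np.linalg.qr(A)

orth_err = np.linalg.norm(Q.T @ Q - np.eye(2))   # Q* Q = I
recon_err = np.linalg.norm(Q @ R - A)            # A = QR
```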