
Symmetric and antisymmetric tensor products for the function-theoretic operator theorist

Published online by Cambridge University Press:  22 December 2023

Stephan Ramon Garcia
Affiliation:
Department of Mathematics and Statistics, Pomona College, 610 North College Avenue, Claremont, CA 91711, United States e-mail: stephan.garcia@pomona.edu
Ryan O’Loughlin*
Affiliation:
School of Mathematics, University of Leeds, Leeds LS2 9JT, United Kingdom Current address: Département de mathématiques et de statistique, Université Laval, Québec City, QC G1V 0A6, Canada e-mail: R.OLoughlin@leeds.ac.uk
Jiahui Yu
Affiliation:
Department of Mathematics, Massachusetts Institute of Technology, Simons Building, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, United States e-mail: jiahu878@mit.edu

Abstract

We study symmetric and antisymmetric tensor products of Hilbert-space operators, focusing on norms and spectra for some well-known classes favored by function-theoretic operator theorists. We pose many open questions that should interest the field.

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Canadian Mathematical Society

1 Introduction

Tensor products and their symmetrization have appeared in the literature since the mid-nineteenth century, such as in Riemann’s foundational work on differential geometry [Reference Riemann26, Reference Riemann27]. Tensors describe many-body quantum systems [Reference Naber24] and symmetric tensors underpin the foundations of general relativity [Reference Carroll3]. In a separate yet overlapping vein, multilinear algebra [Reference Greub16] and representation theory [Reference Fulton and Harris11] utilize symmetric tensor product spaces.

Decomposing a symmetric tensor into a minimal linear combination of tensor powers of the same vector arises in mobile communications, machine learning, factor analysis of k-way arrays, biomedical engineering, psychometrics, and chemometrics (see [Reference Comon4, Reference Comon and Rajih6, Reference De Lathauwer, De Moor and Vandewalle9, Reference Sidiropoulos, Bro and Giannakis30, Reference Smilde, Geladi and Bro33] and the references therein). We refer the reader to [Reference Comon, Golub, Lim and Mourrain5] for a study of this decomposition problem. Symmetric tensors also arise in statistics [Reference McCullagh23].

In quantum mechanics, many-body systems are represented in terms of tensor products of wave functions. In the simplest case, where the systems do not interact, the Hamiltonian of the many-body system corresponds to a symmetric tensor product of operators [Reference Kostrikin and Manin20, Chapter 4, Section 9]. Recently, there has been an endeavor within the physics community to study self-adjoint extensions of symmetric tensor products of operators [Reference Ibort, Marmo and Pérez-Pardo18, Reference Ibort and Pérez-Pardo19, Reference Lenz, Weinmann and Wirth22]. Furthermore, the symmetric part of a quantum geometric tensor can be exploited as a tool to detect quantum phase transitions in $\mathcal {PT}$ -symmetric quantum mechanics [Reference Zhang, Wang and Gong34].

Unfortunately, there is little literature about symmetric tensor products of non-normal operators. The purpose of this paper is to introduce the basic ideas to the function-theoretic operator theory community. We study some fundamental operator-theoretic questions in this area, such as finding the norm and spectrum of symmetric tensor products of operators. We work through some examples with familiar operators, such as the unilateral shift, its adjoint, and diagonal operators. Given the ramifications of symmetric tensor products in a broad spectrum of fields, we hope that initiating this study will shed new light on classical problems and lead to new directions of study for function-theoretic operator theorists.

The layout of this paper is as follows. Section 2 introduces symmetric and antisymmetric tensor power spaces, the domains for the operators in Section 3. In Section 4, we collect results on operator-theoretic properties of symmetric tensor products of bounded operators. The materials in Sections 2–4 are known, but perhaps difficult for the function-theoretic operator theorist to locate in one place. More novel material occupies Sections 5–9, although it is possible that some of the contents of Section 5 have appeared before. Section 5 is devoted to the norms of symmetric tensor powers of operators, while Section 6 focuses on the spectrum. We study symmetric tensor products of diagonal operators in Section 7, the forward and backward shift operators in Section 8, and the symmetric tensor product of shifts and diagonal operators in Section 9. We conclude in Section 10 with a host of open questions that should appeal to researchers in function-theoretic operator theory.

2 Symmetric and antisymmetric tensor power spaces

Symmetric and antisymmetric tensor powers are familiar in mathematical physics, but less so in function-theoretic operator theory. We summarize the basics, with abbreviated explanations or without proof (see [Reference Bhatia1, Section I.5] or [Reference Simon32, Section 3.8] for the details).

Let $\mathcal {H}$ be a complex Hilbert space, in which the inner product $\langle \cdot , \cdot \rangle $ is linear in the first argument and conjugate linear in the second. We assume that $\mathcal {H}$ has a countable orthonormal basis. A superscript $^-$ denotes the closure with respect to the norm of $\mathcal {H}$ .

Let $\mathcal {B}(\mathcal {H})$ denote the space of bounded linear operators on $\mathcal {H}$ . For $\mathbf {u}_1, \mathbf {u}_2, \ldots , \mathbf {u}_n \in \mathcal {H}$ , the simple tensor $\mathbf {u}_1 \otimes \mathbf {u}_2 \otimes \cdots \otimes \mathbf {u}_n : \mathcal {H}^n \to \mathbb {C}$ acts as follows:

$$ \begin{align*} (\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n)(\mathbf{v}_1,\mathbf{v}_2, \ldots, \mathbf{ v}_n) =\langle \mathbf{u}_1, \mathbf{v}_1 \rangle \langle \mathbf{u}_2, \mathbf{v}_2 \rangle \cdots \langle \mathbf{ u}_n, \mathbf{v}_n \rangle. \end{align*} $$

A simple tensor is a conjugate-multilinear function of its arguments. The map taking an n-tuple in $\mathcal {H}^n$ to the corresponding simple tensor is linear in each argument.

Let $\mathcal {H}^{ \widehat {\otimes }n}$ denote the $\mathbb {C}$ -vector space spanned by the simple tensors. There is a unique inner product on $\mathcal {H}^{ \widehat {\otimes }n} $ such that

(2.1) $$ \begin{align} \langle\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n, \, \mathbf{v}_1 \otimes \mathbf{ v}_2 \otimes \cdots \otimes \mathbf{v}_n \rangle :=\langle \mathbf{u}_1, \mathbf{v}_1 \rangle \langle \mathbf{u}_2, \mathbf{v}_2 \rangle \cdots\langle \mathbf{u}_n, \mathbf{v}_n \rangle \end{align} $$

for all $\mathbf {u}_1, \mathbf {u}_2, \ldots , \mathbf {u}_n ,\mathbf {v}_1, \mathbf {v}_2, \ldots , \mathbf {v}_n\in \mathcal {H}$ [Reference Simon32, Proposition 3.8.2]. Moreover,

$$ \begin{align*} \| \mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n \| = \| \mathbf{u}_1 \| \| \mathbf{u}_2 \| \cdots \| \mathbf{u}_n \|. \end{align*} $$
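In finite dimensions, these identities can be tested concretely: for vectors in $\mathbb{C}^d$, the Kronecker product models the simple tensor. The following NumPy sketch (the helper names `simple_tensor` and `ip` are ours, not from the article) checks (2.1) and the norm formula above for random complex vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 3

# Random complex vectors in C^d; simple tensors are modeled by np.kron.
def rand_vec():
    return rng.normal(size=d) + 1j * rng.normal(size=d)

u = [rand_vec() for _ in range(n)]
v = [rand_vec() for _ in range(n)]

def simple_tensor(vs):
    t = vs[0]
    for w in vs[1:]:
        t = np.kron(t, w)
    return t

def ip(x, y):
    # <x, y>: linear in x, conjugate-linear in y (the paper's convention)
    return np.vdot(y, x)

# The inner product identity (2.1)
lhs = ip(simple_tensor(u), simple_tensor(v))
rhs = np.prod([ip(a, b) for a, b in zip(u, v)])
assert np.isclose(lhs, rhs)

# Norm multiplicativity of simple tensors
assert np.isclose(np.linalg.norm(simple_tensor(u)),
                  np.prod([np.linalg.norm(a) for a in u]))
```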

Definition 2.2 (Tensor powers of Hilbert spaces)

Let $\mathcal {H}^{\otimes 0} := \mathbb {C}$ . For $n=1,2,\ldots ,$ let $\mathcal {H}^{\otimes n}$ denote the completion of $\mathcal {H}^{ \widehat {\otimes }n}$ with respect to the inner product (2.1).

For $n=2$ , we may write $\mathcal {H}\otimes \mathcal {H}$ instead of $\mathcal {H}^{\otimes 2}$ . If $\mathbf {e}_1, \mathbf {e}_2,\ldots $ is an orthonormal basis for $\mathcal {H}$ , then $\mathbf {e}_{i_1} \otimes \mathbf {e}_{i_2} \otimes \cdots \otimes \mathbf {e}_{i_n}$ for $(i_1,i_2,\ldots ,i_n) \in \mathbb {N}^n$ is an orthonormal basis for $\mathcal {H}^{\otimes n}$ . Here, $\mathbb {N} := \{1,2,3,\ldots \}$ denotes the set of natural numbers.

Let $\Sigma _n$ be the group of permutations of $[n]:=\{1,2, \ldots ,n\}$ . For all $\pi \in \Sigma _n$ and $\mathbf {u}_1,\mathbf {u}_2, \ldots , \mathbf {u}_n \in \mathcal {H}$ , define

$$ \begin{align*} \widehat{\pi}(\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n ) :=\mathbf{u}_{\pi (1)} \otimes \mathbf{u}_{\pi (2)}\otimes \cdots \otimes \mathbf{u}_{\pi (n)}. \end{align*} $$

The density of the span of the simple tensors ensures that $\widehat {\pi }$ extends to a bounded linear map on $\mathcal {H}^{\otimes n}$ .

Proposition 2.3 Let $\pi ,\tau \in \Sigma _n$ . (a) $\widehat {\pi \tau } = \widehat {\pi } \widehat {\tau }$ . (b) The map $\widehat {\pi }$ on $\mathcal {H}^{\otimes n}$ is unitary.

Proof (a) Since the span of the simple tensors is dense in $\mathcal {H}^{\otimes n}$ , it suffices to observe that

$$ \begin{align*} \widehat{\pi \tau }(\mathbf{u}_1\otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n ) &=\mathbf{u}_{(\pi \tau) (1)}\otimes \mathbf{u}_{(\pi \tau) (2)}\otimes \cdots \otimes \mathbf{u}_{(\pi \tau) (n)} \\ &=\widehat{\pi} (\mathbf{u}_{ \tau (1)} \otimes \mathbf{u}_{\tau (2)}\otimes \cdots \otimes \mathbf{u}_{ \tau (n)}) \\ &=\widehat{\pi} ( \widehat{\tau} (\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n ) ) \end{align*} $$

for any $ \mathbf {u}_1,\mathbf {u}_2,\ldots , \mathbf {u}_{n} \in \mathcal {H}$ .

(b) For any $ \mathbf {u}_1,\mathbf {u}_2,\ldots , \mathbf {u}_{n},\mathbf {v}_1, \mathbf {v}_2 ,\ldots , \mathbf {v}_n \in \mathcal {H}$ , (2.1) ensures that

$$ \begin{align*} & \langle\widehat{\pi}(\mathbf{v}_1\otimes \mathbf{v}_2\otimes \cdots \otimes \mathbf{v}_n), \, \mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n\rangle \\ &\qquad=\langle\mathbf{v}_{\pi(1)}\otimes \mathbf{v}_{\pi(2)}\otimes \cdots \otimes \mathbf{v}_{\pi(n)}, \, \mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n\rangle \\ &\qquad= \prod_{i=1}^{n} \langle \mathbf{v}_{\pi(i)}, \mathbf{u}_i \rangle = \prod_{j=1}^{n} \langle \mathbf{v}_{j}, \mathbf{u}_{\pi^{-1}(j)} \rangle \\ &\qquad=\langle\mathbf{v}_{1}\otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_{n}, \, \mathbf{ u}_{\pi^{-1}(1)} \otimes \mathbf{u}_{\pi^{-1}(2)} \otimes \cdots \otimes \mathbf{u}_{\pi^{-1}(n)}\rangle \\ &\qquad= \big\langle\mathbf{v}_1\otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n, \, \widehat{\pi^{-1}}(\mathbf{u}_1 \otimes \mathbf{u}_2 \otimes \cdots \otimes \mathbf{u}_n)\big\rangle. \end{align*} $$

Therefore, $\widehat {\pi }^* = \widehat {\pi ^{-1}}$ . Since (a) gives $\widehat {\pi }\, \widehat {\pi ^{-1}} = \widehat {\pi ^{-1}}\, \widehat {\pi } = I$ , it follows that $\widehat {\pi }^* = \widehat {\pi }^{-1}$ , so $\widehat {\pi }$ is unitary.
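On $(\mathbb{C}^d)^{\otimes n}$, each $\widehat{\pi}$ is a permutation matrix, so Proposition 2.3 can be verified numerically. A minimal NumPy sketch (permutations are $0$-based tuples here, and the composition is taken so that it matches the matrix product, as in the proof above):

```python
import numpy as np
from itertools import permutations

d, n = 2, 3
N = d ** n

def pi_hat_matrix(p):
    # Sends e_{i_1}⊗...⊗e_{i_n} to e_{i_{p(1)}}⊗...⊗e_{i_{p(n)}};
    # implemented by transposing the axes of the reshaped coefficient tensor.
    M = np.empty((N, N))
    for col in range(N):
        e = np.zeros(N)
        e[col] = 1.0
        M[:, col] = e.reshape((d,) * n).transpose(p).reshape(N)
    return M

perms = list(permutations(range(n)))

# (b) Each pi-hat is unitary (here a real permutation matrix).
for p in perms:
    M = pi_hat_matrix(p)
    assert np.allclose(M.T @ M, np.eye(N))

# (a) The hat map is multiplicative: composing permutations corresponds
# to multiplying the associated matrices.
for p in perms:
    for t in perms:
        c = tuple(t[p[k]] for k in range(n))
        assert np.allclose(pi_hat_matrix(c),
                           pi_hat_matrix(p) @ pi_hat_matrix(t))
```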

We now define certain subspaces of $\mathcal {H}^{\otimes n}$ that respect the action of the operators $\widehat {\pi }$ .

Definition 2.4 (Symmetric and antisymmetric tensor powers of Hilbert spaces)

Let $\operatorname {sgn}\pi $ denote the sign of a permutation $\pi \in \Sigma _n$ .

  1. (a) Let ${\mathcal {H}}^{\odot 1} :={\mathcal {H}}$ and ${\mathcal {H}}^{\odot n} :=\{ \mathbf {v} \in {\mathcal {H}}^{\otimes n}: \widehat {\pi }(\mathbf {v})= \mathbf {v} \text { for all } \pi \in \Sigma _n \}$ for $n \geqslant 2$ .

  2. (b) Let ${\mathcal {H}}^{\wedge 1} :={\mathcal {H}}$ and ${\mathcal {H}}^{\wedge n} :=\{ \mathbf {v} \in {\mathcal {H}}^{\otimes n}: \widehat {\pi }(\mathbf {v})= (\operatorname {sgn} \pi )\mathbf {v} \text { for all } \pi \in \Sigma _n \}$ for $n \geqslant 2$ .

We may write $\mathcal {H} \odot \mathcal {H}$ and $\mathcal {H} \wedge \mathcal {H}$ instead of $\mathcal {H}^{\odot 2}$ and $\mathcal {H}^{\wedge 2}$ , respectively. In this case, there is only one nonidentity $\pi \in \Sigma _2$ .

Example 2.5 Let $H^2(\mathbb {D})$ denote the Hardy space on the unit disk $\mathbb {D}$ . The monomials $1,z,z^2,\ldots $ are an orthonormal basis for $H^2(\mathbb {D})$ , so the simple tensors $z^i \otimes z^j$ for $i,j=0,1,\ldots $ are an orthonormal basis for $H^2(\mathbb {D}) \otimes H^2(\mathbb {D})$ . The unitary map $z^i \otimes z^j \mapsto z^i w^j$ identifies $H^2(\mathbb {D}) \otimes H^2(\mathbb {D})$ with $H^2(\mathbb {D}^2)$ , the Hardy space on the bidisk $\mathbb {D}^2$ [Reference Douglas and Yang10]. Thus, we identify $H^2(\mathbb {D}) \odot H^2(\mathbb {D})$ and $H^2(\mathbb {D}) \wedge H^2(\mathbb {D})$ with

(2.6) $$ \begin{align} H^2_{\operatorname{sym}}(\mathbb{D}^2) := \{ f(z,w) \in H^2(\mathbb{D}^2) : f(z,w) = f(w,z) \text{ for all } z,w \in \mathbb{D} \} \end{align} $$

and

(2.7) $$ \begin{align} H^2_{\operatorname{asym}}(\mathbb{D}^2) := \{ f(z,w) \in H^2(\mathbb{D}^2) : f(z,w) = - f(w,z) \text{ for all } z,w \in \mathbb{D} \}, \end{align} $$

respectively. We freely use these identifications in what follows.

Definition 2.8 (Symmetrization and antisymmetrization operators)

Define $\mathrm {A}_n: \mathcal {H}^{\otimes n} \to \mathcal {H}^{\otimes n}$ and $\mathrm {S}_n: \mathcal {H}^{\otimes n}\to \mathcal {H}^{\otimes n}$ by

$$ \begin{align*} \mathrm{S}_{n}:=\frac{1}{n !} \sum_{\pi \in \Sigma_{n}} \widehat{\pi} \qquad \text{and} \qquad \mathrm{A}_{n}:=\frac{1}{n !} \sum_{\pi \in \Sigma_{n}}\text{sgn}({\pi}) \widehat{\pi}. \end{align*} $$

Proposition 2.9

  1. (a) ${\mathrm {S}}_n$ is the orthogonal projection from ${\mathcal {H}}^{\otimes n}$ onto ${\mathcal {H}}^{\odot n}$ .

  2. (b) ${\mathrm {A}}_n$ is the orthogonal projection from ${\mathcal {H}}^{\otimes n}$ onto ${\mathcal {H}}^{\wedge n}$ .

In particular, $\mathcal {H}^{\odot n}$ and $\mathcal {H}^{\wedge n}$ are closed subspaces of $\mathcal {H}^{\otimes n}$ .

Proof (a) Use Proposition 2.3 and the fact that $\widehat {\pi }\mathrm {S}_n = \mathrm {S}_n$ for all $\pi \in \Sigma _n$ to show that $\mathrm {S}_n^2 = \mathrm {S}_n = \mathrm {S}_n^*$ and $\operatorname {ran} \mathrm {S}_n = \mathcal {H}^{\odot n}$ . The proof of (b) is similar.
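In a finite-dimensional model, Proposition 2.9 amounts to checking that $\mathrm{S}_n$ and $\mathrm{A}_n$ are self-adjoint idempotents with orthogonal ranges. A NumPy sketch for $n = 3$ on $(\mathbb{C}^2)^{\otimes 3}$ (the helpers `pi_hat` and `sign` are ours):

```python
import numpy as np
from itertools import permutations
from math import factorial

d, n = 2, 3
N = d ** n

def pi_hat(p):
    # permutation operator on (C^d)^{⊗n} for the 0-based permutation p
    M = np.empty((N, N))
    for col in range(N):
        e = np.zeros(N)
        e[col] = 1.0
        M[:, col] = e.reshape((d,) * n).transpose(p).reshape(N)
    return M

def sign(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

S = sum(pi_hat(p) for p in permutations(range(n))) / factorial(n)
A = sum(sign(p) * pi_hat(p) for p in permutations(range(n))) / factorial(n)

# Self-adjoint idempotents, hence orthogonal projections ...
assert np.allclose(S @ S, S) and np.allclose(S.T, S)
assert np.allclose(A @ A, A) and np.allclose(A.T, A)
# ... with orthogonal ranges (for n >= 2)
assert np.allclose(S @ A, np.zeros((N, N)))
```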

Let $\mathbf {v}_1, \mathbf {v}_2,\ldots , \mathbf {v}_n\in \mathcal {H}$ and define the simple symmetric and antisymmetric tensors

$$ \begin{align*} \mathbf{v}_1 \odot \mathbf{v}_2 \odot \cdots \odot \mathbf{v}_n &:=\mathrm{S}_n ( \mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n) \quad \text{and}\\ \mathbf{v}_1 \wedge \mathbf{v}_2 \wedge \cdots \wedge \mathbf{v}_n &:=\mathrm{A}_n ( \mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{v}_n). \end{align*} $$

A factor of $1/\sqrt {n!}$ is included in some sources [Reference Simon32, (3.8.33)] and $\vee $ is sometimes used instead of $\odot $ . Note that $\mathbf {v}_1 \wedge \mathbf {v}_2 \wedge \cdots \wedge \mathbf {v}_n = \mathbf {0}$ if $\mathbf {v}_i = \mathbf {v}_j$ for some $i \neq j$ .

Proposition 2.10 Let $\mathbf {e}_1, \mathbf {e}_2, \mathbf {e}_3,\ldots $ be an orthonormal basis for $\mathcal {H}$ .

  1. (a) $\textbf {e}_{i_{1}} {\odot } \, \mathbf {e}_{i_{2}}\, {\odot }\, {\cdots } \, {\odot } \, \mathbf {e}_{{i}_{n}}$ for $1 {\leqslant } i_{1} {\leqslant } i_{2} {\leqslant } {\cdots } {\leqslant } i_{n}$ form an orthogonal basis for ${\mathcal {H}}^{\odot n}$ .

  2. (b) $\mathbf {e}_{i_1} \wedge \mathbf {e}_{i_2} \wedge \cdots \wedge \mathbf {e}_{i_n}$ for $1 \leqslant i_1 < i_2 < \cdots < i_n$ form an orthogonal basis for ${\mathcal {H}}^{\wedge n}$ .

We say “orthogonal” instead of “orthonormal” because the vectors described in the previous proposition need not be unit vectors. Let $m_{\ell }$ denote the number of occurrences of $\ell $ in $(i_1,i_2,\ldots ,i_n) \in \mathbb {N}^n$ ; only finitely many of the $m_{\ell }$ are nonzero. Then there are $m_1! m_2! \cdots $ permutations of $\mathbf {e}_{i_1} \otimes \mathbf {e}_{i_2} \otimes \cdots \otimes \mathbf {e}_{i_n}$ that give rise to the same simple tensor. Thus,

$$ \begin{align*} \| \mathbf{e}_{i_1} \odot \mathbf{e}_{i_2} \odot \cdots \odot \mathbf{e}_{i_n} \| = \bigg(\frac{m_1! m_2! \cdots} {n!}\bigg)^{1/2}. \end{align*} $$
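For instance, $\mathbf{e}_1 \odot \mathbf{e}_1 \odot \mathbf{e}_2$ in $(\mathbb{C}^2)^{\odot 3}$ has $m_1 = 2$ and $m_2 = 1$, so its norm should be $\sqrt{2!\,1!/3!} = 1/\sqrt{3}$. A NumPy sketch confirming this (the helper `pi_hat` is ours):

```python
import numpy as np
from itertools import permutations
from math import factorial

d, n = 2, 3
N = d ** n

def pi_hat(p):
    # permutation operator on (C^d)^{⊗n} for the 0-based permutation p
    M = np.empty((N, N))
    for col in range(N):
        e = np.zeros(N)
        e[col] = 1.0
        M[:, col] = e.reshape((d,) * n).transpose(p).reshape(N)
    return M

S3 = sum(pi_hat(p) for p in permutations(range(n))) / factorial(n)

e1, e2 = np.eye(d)
x = S3 @ np.kron(np.kron(e1, e1), e2)   # e1 ⊙ e1 ⊙ e2, so m1 = 2, m2 = 1
assert np.isclose(np.linalg.norm(x),
                  np.sqrt(factorial(2) * factorial(1) / factorial(3)))
```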

If $\dim \mathcal {H} = d$ is finite, then (using the notation for binomial coefficients)

$$ \begin{align*} \dim \mathcal{H}^{\odot n} =\binom{d+n-1}{n} \quad \text{and} \quad \dim \mathcal{H}^{\wedge n} = \begin{cases} \binom{d}{n}, & \text{if } n \leqslant d,\\[2pt] 0, & \text{if } n> d. \end{cases} \end{align*} $$
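Both dimension formulas can be confirmed by computing the ranks of the projections $\mathrm{S}_n$ and $\mathrm{A}_n$ on $(\mathbb{C}^d)^{\otimes n}$. A NumPy sketch over a few small values of $d$ and $n$ (the helper names are ours):

```python
import numpy as np
from itertools import permutations
from math import comb, factorial

def sign(p):
    # sign of a permutation via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def sym_asym_dims(d, n):
    # ranks of the symmetrizer and antisymmetrizer on (C^d)^{⊗n}
    N = d ** n
    S = np.zeros((N, N))
    A = np.zeros((N, N))
    for p in permutations(range(n)):
        M = np.empty((N, N))
        for col in range(N):
            e = np.zeros(N)
            e[col] = 1.0
            M[:, col] = e.reshape((d,) * n).transpose(p).reshape(N)
        S += M
        A += sign(p) * M
    S /= factorial(n)
    A /= factorial(n)
    return np.linalg.matrix_rank(S), np.linalg.matrix_rank(A)

for d in (2, 3, 4):
    for n in (2, 3):
        dim_sym, dim_asym = sym_asym_dims(d, n)
        assert dim_sym == comb(d + n - 1, n)
        assert dim_asym == (comb(d, n) if n <= d else 0)
```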

The case $n=2$ is special since $\dim \mathcal {H}^{\otimes 2} = d^2 = \binom {d+1}{2} + \binom {d}{2} = \dim \mathcal {H}^{\odot 2} + \dim \mathcal {H}^{\wedge 2}$ , which suggests Proposition 2.11. The simple symmetric and antisymmetric tensors are

$$ \begin{align*} \mathbf{u}\odot \mathbf{v}=\tfrac{1}{2}( \mathbf{u}\otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{u}) \in \mathcal{H}^{\odot 2} \quad \text{and}\quad \mathbf{u}\wedge \mathbf{v}=\tfrac{1}{2}( \mathbf{u}\otimes \mathbf{v} - \mathbf{v} \otimes \mathbf{u}) \in \mathcal{H}^{\wedge 2} \end{align*} $$

for $\mathbf {u}, \mathbf {v} \in \mathcal {H}$ . If $\mathbf {e}_1, \mathbf {e}_2, \mathbf {e}_3,\ldots $ is an orthonormal basis for $\mathcal {H}$ , then

  1. (a) $\sqrt {2}(\mathbf {e}_i \odot \mathbf {e}_j)$ for $i < j$ and $\mathbf {e}_i \odot \mathbf {e}_i$ for $i \geqslant 1$ form an orthonormal basis for $\mathcal {H}^{\odot 2}$ , and

  2. (b) $\sqrt {2}(\mathbf {e}_i \wedge \mathbf {e}_j)$ for $i < j$ form an orthonormal basis for $\mathcal {H}^{\wedge 2}$ .

Proposition 2.11 $\mathcal {H}^{\otimes 2}= \mathcal {H}^{\odot 2} \oplus \mathcal {H}^{\wedge 2}$ is an orthogonal decomposition.

Proof Let $\pi $ be the nonidentity permutation in $\Sigma _2$ . If $\mathbf {x} \in \mathcal {H}^{\otimes 2}$ , then

$$ \begin{align*} \mathbf{x} =\underbrace{\tfrac{1}{2}(\mathbf{x}+\widehat{\pi}(\mathbf{x})) }_{\mathrm{S}_2(\mathbf{x}) \in \mathcal{H}^{\odot 2}} +\underbrace{\tfrac{1}{2}(\mathbf{x}-\widehat{\pi}(\mathbf{x})) }_{\mathrm{A}_2(\mathbf{x}) \in \mathcal{H}^{\wedge 2}} \end{align*} $$

and hence $\mathcal {H}^{\otimes 2}= \mathcal {H}^{\odot 2} + \mathcal {H}^{\wedge 2}$ . Since $\widehat {\pi }$ is unitary (Proposition 2.3) and involutive ($\widehat {\pi }^2 =I$ ), it is self-adjoint. If $\mathbf {u} \in \mathcal {H}^{\odot 2}$ and $\mathbf {v} \in \mathcal {H}^{ \wedge 2}$ , then $ \langle \mathbf {u}, \mathbf {v}\rangle =\langle \widehat {\pi }\mathbf {u},\mathbf {v}\rangle =\langle \mathbf {u},\widehat {\pi }\mathbf {v} \rangle =\langle \mathbf {u},-\mathbf {v}\rangle =-\langle \mathbf {u},\mathbf {v}\rangle $ , so $\langle \mathbf {u},\mathbf {v}\rangle =0$ . Thus, $\mathcal {H}^{\odot 2} \perp \mathcal {H}^{\wedge 2}$ and, in particular, $\mathcal {H}^{\odot 2}\cap \mathcal {H}^{\wedge 2} =\{ \mathbf {0} \}$ , so the sum is an orthogonal direct sum.
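For $\mathcal{H} = \mathbb{C}^d$, Proposition 2.11 says that $\mathrm{S}_2 = \tfrac{1}{2}(I + W)$ and $\mathrm{A}_2 = \tfrac{1}{2}(I - W)$, where $W$ is the swap $\widehat{\pi}$, sum to the identity and have orthogonal ranges of dimensions $\binom{d+1}{2}$ and $\binom{d}{2}$. A NumPy check:

```python
import numpy as np

d = 3
I = np.eye(d * d)
W = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        W[j * d + i, i * d + j] = 1.0    # the swap: W(u⊗v) = v⊗u
S2, A2 = (I + W) / 2, (I - W) / 2

# The two projections sum to I and have orthogonal ranges.
assert np.allclose(S2 + A2, I)
assert np.allclose(S2 @ A2, np.zeros((d * d, d * d)))
# Dimension count: d(d+1)/2 + d(d-1)/2 = d^2.
assert np.linalg.matrix_rank(S2) == d * (d + 1) // 2
assert np.linalg.matrix_rank(A2) == d * (d - 1) // 2
```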

Example 2.12 Recall from Example 2.5 the identification of $H^2(\mathbb {D}) \otimes H^2(\mathbb {D})$ with $H^2(\mathbb {D}^2)$ . The orthogonal decomposition of Proposition 2.11 becomes

(2.13) $$ \begin{align} H^2(\mathbb{D}^2) = H^2_{\operatorname{sym}}(\mathbb{D}^2) \oplus H^2_{\operatorname{asym}}(\mathbb{D}^2), \end{align} $$

where the direct summands are defined by (2.6) and (2.7), respectively. In this context, $z^i w^i$ for $i \geqslant 0$ and $( z^i w^j + z^j w^i)/\sqrt {2}$ for $0 \leqslant i < j$ form an orthonormal basis for $H^2_{\operatorname {sym}}(\mathbb {D}^2)$ and $(z^i w^j - z^j w^i)/\sqrt {2}$ for $0 \leqslant i < j$ form an orthonormal basis for $H^2_{\operatorname {asym}}(\mathbb {D}^2)$ .

Lemma 2.14 If $\sum _{i \leqslant j} | a_{ij}|^2 < \infty $ , then $\sum _{i \leqslant j} a_{ij} \mathbf {e}_i \odot \mathbf {e}_j \in \mathcal {H} \odot \mathcal {H} $ .

Proof Proposition 2.10 ensures that

$$ \begin{align*} \Big\|\sum_{i \leqslant j} a_{ij} \mathbf{e}_i \odot \mathbf{e}_j\Big\|^2 = \Big\|\sum_{i < j} \frac{a_{ij}}{\sqrt{2}}\, \sqrt{2}\, \mathbf{e}_i \odot \mathbf{e}_j + \sum_{i = 1}^{\infty} a_{ii}\, \mathbf{e}_i \odot \mathbf{e}_i \Big\|^2 = \sum_{i < j} \frac{|a_{ij}|^2}{2} + \sum_{i = 1}^{\infty} |a_{ii}|^2 \leqslant \sum_{i \leqslant j} |a_{ij}|^2 < \infty. \end{align*} $$

Lemma 2.15 For $\mathbf {u}, \mathbf {v} \in \mathcal {H}$ , we have $\frac {1}{\sqrt {2}}\| \mathbf {u} \| \| \mathbf {v} \| \leqslant \| \mathbf {u} \odot \mathbf {v} \| \leqslant \| \mathbf {u} \| \| \mathbf {v} \|$ ; both inequalities are sharp. In particular, the symmetric tensor product of two nonzero vectors is nonzero.

Proof The Cauchy–Schwarz inequality provides the upper inequality since

(2.16) $$ \begin{align} \| \mathbf{u} \odot \mathbf{v} \|^2 &= \tfrac{1}{4} \| \mathbf{u} \otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{u} \|^2 = \tfrac{1}{4} \langle \mathbf{u} \otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{u}, \, \mathbf{u} \otimes \mathbf{v} + \mathbf{v} \otimes \mathbf{u} \rangle\nonumber \\&= \tfrac{1}{4} ( \| \mathbf{u} \otimes \mathbf{v} \|^2 + \| \mathbf{v} \otimes \mathbf{u} \|^2+ 2\operatorname{Re}\langle \mathbf{u} \otimes \mathbf{v}, \mathbf{v} \otimes \mathbf{u}\rangle)\nonumber \\&= \tfrac{1}{4} (2\| \mathbf{u} \|^2 \| \mathbf{v} \|^2 + 2 |\langle \mathbf{u}, \mathbf{v} \rangle|^2 )\\& \leqslant \tfrac{1}{4} (2\| \mathbf{u} \|^2 \| \mathbf{v} \|^2 + 2\| \mathbf{u} \|^2 \| \mathbf{v} \|^2) \nonumber \\&= \|\mathbf{u} \|^2 \|\mathbf{v} \|^2,\nonumber \end{align} $$

In (2.16), $|\langle \mathbf {u}, \mathbf {v} \rangle |^2$ is nonnegative, so we obtain the lower inequality. The upper inequality is sharp if $\mathbf {u} = \mathbf {v}$ and the lower inequality is sharp if $\mathbf {u} \perp \mathbf {v}$ .
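Both bounds and both sharpness claims of Lemma 2.15 are easy to test numerically in $\mathbb{C}^d$, where $\mathbf{u} \odot \mathbf{v} = \mathrm{S}_2(\mathbf{u} \otimes \mathbf{v})$. A NumPy sketch (the helper `sym` is ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
W = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        W[j * d + i, i * d + j] = 1.0    # swap: W(u⊗v) = v⊗u
S2 = (np.eye(d * d) + W) / 2             # projection onto H ⊙ H

def sym(u, v):
    return S2 @ np.kron(u, v)            # u ⊙ v as a vector in C^{d^2}

u = rng.normal(size=d) + 1j * rng.normal(size=d)
v = rng.normal(size=d) + 1j * rng.normal(size=d)
nu, nv = np.linalg.norm(u), np.linalg.norm(v)
ns = np.linalg.norm(sym(u, v))

# Lemma 2.15: ||u|| ||v|| / sqrt(2) <= ||u ⊙ v|| <= ||u|| ||v||
assert nu * nv / np.sqrt(2) - 1e-12 <= ns <= nu * nv + 1e-12

# Sharpness: u = v attains the upper bound, orthogonal u, v the lower bound.
assert np.isclose(np.linalg.norm(sym(u, u)), nu * nu)
e1, e2 = np.eye(d)[0], np.eye(d)[1]
assert np.isclose(np.linalg.norm(sym(e1, e2)), 1 / np.sqrt(2))
```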

3 Symmetric and antisymmetric tensor products of operators

For $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ , define $A_1 \otimes A_2 \otimes \dots \otimes A_n$ on simple tensors by

$$ \begin{align*} (A_1 \otimes A_2 \otimes \dots \otimes A_n) (\mathbf{v}_1 \otimes \mathbf{v}_2 \otimes \cdots \otimes \mathbf{ v}_n) = A_1 \mathbf{v}_1 \otimes A_2 \mathbf{v}_2 \otimes \cdots \otimes A_n \mathbf{v}_n. \end{align*} $$

This extends by linearity to the linear span $\mathcal {H}^{ \widehat {\otimes }n}$ of the simple tensors. The density of $\mathcal {H}^{\widehat {\otimes }n}$ in $\mathcal {H}^{\otimes n}$ ensures that $A_1 \otimes A_2 \otimes \dots \otimes A_n$ has a unique bounded extension to $\mathcal {H}^{\otimes n}$ , also denoted $A_1 \otimes A_2 \otimes \dots \otimes A_n$ , which satisfies [Reference Simon32, (3.8.17)]:

(3.1) $$ \begin{align} \| A_1 \otimes A_2 \otimes \dots \otimes A_n \| = \| A_1 \| \| A_2 \| \cdots \|A_n \|. \end{align} $$

We may write $A^{\otimes n}$ instead of $A \otimes A \otimes \cdots \otimes A$ (n times).

Proposition 3.2 Let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ . Then $\mathcal {H}^{\odot n}$ and $\mathcal {H}^{\wedge n}$ are invariant under

$$ \begin{align*} \mathrm{S}_n(A_1,A_2,\ldots,A_n) = \frac{1}{n!}\sum_{\pi\in \Sigma_n} (A_{\pi(1)} \otimes A_{\pi(2)} \otimes \dots \otimes A_{\pi(n)}) \in \mathcal{B}(\mathcal{H}^{\otimes n}). \end{align*} $$

Proof Let $T = \mathrm {S}_n(A_1,A_2,\ldots ,A_n)$ . For $\mathbf {v}_1, \mathbf {v}_2, \ldots , \mathbf {v}_n \in \mathcal {H}$ ,

$$ \begin{align*} &T ( \mathbf{v}_1 \odot \mathbf{v}_2 \odot \cdots \odot \mathbf{v}_n ) = \frac{1}{n!}\sum_{\pi\in \Sigma_n}(A_{\pi(1)} \otimes A_{\pi(2)} \otimes \cdots \otimes A_{\pi(n)}) (\mathbf{v}_1 \odot \mathbf{v}_2 \odot \cdots \odot \mathbf{v}_n) \\ &\qquad= \frac{1}{n!}\sum_{\pi\in \Sigma_n}(A_{\pi(1)} \otimes A_{\pi(2)} \otimes \cdots \otimes A_{\pi(n)}) \bigg( \frac{1}{n !} \sum_{\tau \in \Sigma_{n}} \mathbf{v}_{\tau(1)} \otimes \mathbf{v}_{\tau(2)} \otimes \cdots \otimes \mathbf{v}_{\tau(n)} \bigg) \\ &\qquad= \frac{1}{(n!)^2}\sum_{\pi\in \Sigma_n} \sum_{\tau \in \Sigma_{n}} (A_{\pi(1)}\mathbf{v}_{\tau(1)} \otimes A_{\pi(2)}\mathbf{v}_{\tau(2)} \otimes \cdots \otimes A_{\pi(n)}\mathbf{v}_{\tau(n)}) \\ &\qquad= \frac{1}{n!}\sum_{\sigma\in \Sigma_n} A_{\sigma(1)}\mathbf{v}_{1} \odot A_{\sigma(2)}\mathbf{v}_{2} \odot \cdots \odot A_{\sigma(n)}\mathbf{v}_{n} , \end{align*} $$

a sum of elements in $\mathcal {H}^{\odot n}$ . The density of the simple symmetric tensors in $\mathcal {H}^{\odot n}$ ensures that $T \mathcal {H}^{\odot n} \subseteq \mathcal {H}^{\odot n}$ . The proof that $T \mathcal {H}^{\wedge n} \subseteq \mathcal {H}^{\wedge n}$ is similar.
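For $n = 2$, the invariance asserted by Proposition 3.2 can be checked numerically: $\mathrm{S}_2(A,B)$ should map the range of the symmetrizer into itself, and likewise for the antisymmetrizer. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(8)
d = 3
W = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        W[j * d + i, i * d + j] = 1.0            # swap: W(u⊗v) = v⊗u
S2, A2 = (np.eye(d * d) + W) / 2, (np.eye(d * d) - W) / 2

A, B = rng.normal(size=(d, d)), rng.normal(size=(d, d))
T = (np.kron(A, B) + np.kron(B, A)) / 2          # S_2(A, B)

# T maps ran S_2 into ran S_2 and ran A_2 into ran A_2:
assert np.allclose(S2 @ (T @ S2), T @ S2)
assert np.allclose(A2 @ (T @ A2), T @ A2)
```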

The proposition above suggests the following definition.

Definition 3.3 (Symmetric tensor products of operators)

Let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ . Then $A_1 \odot A_2 \odot \cdots \odot A_n$ and $A_1 \wedge A_2 \wedge \cdots \wedge A_n$ are the restrictions of

$$ \begin{align*} \mathrm{S}_n(A_1,A_2,\ldots,A_n) = \frac{1}{n!}\sum_{\pi\in \Sigma_n} (A_{\pi(1)} \otimes A_{\pi(2)} \otimes \dots \otimes A_{\pi(n)}) \end{align*} $$

to $\mathcal {H}^{\odot n}$ and $\mathcal {H}^{\wedge n}$ , respectively. We may write $A^{\odot n}$ and $A^{\wedge n}$ instead of $A \odot A \odot \cdots \odot A$ (n times) and $A \wedge A \wedge \cdots \wedge A$ (n times), respectively.

Symmetric tensor products are permutation invariant:

$$ \begin{align*} A_1 \odot A_2 \odot \cdots \odot A_n = A_{\pi(1)} \odot A_{\pi(2)} \odot \cdots \odot A_{\pi(n)} \quad \text{for all } \pi \in \Sigma_n. \end{align*} $$

If $A, B , C \in \mathcal {B}(\mathcal {H})$ , then the domain of $A \odot B$ is $\mathcal {H} \odot \mathcal {H}$ , which is not equal to $\mathcal {H}$ , the domain of C. Thus, $( A \odot B ) \odot C$ is not well defined. Note that $I \odot I \odot \cdots \odot I = I$ .

Proposition 3.4 (a) For all $A_1, A_2, \ldots , A_n \in \mathcal {B} (\mathcal {H})$ , we have $\| A_1 \odot A_2 \odot \cdots \odot A_n \| \leqslant \|A_1\| \|A_2 \| \cdots \| A_n \|$ . (b) For all $A \in \mathcal {B}(\mathcal {H})$ , we have $\| A^{\odot n} \| = \| A \|^n$ .

Proof (a) Since $ A_1 \odot A_2 \odot \cdots \odot A_n $ is the restriction of $ \frac {1}{n!}\sum _{\pi \in \Sigma _n}(A_{\pi (1)} \otimes A_{\pi (2)} \otimes \dots \otimes A_{\pi (n)}) $ to $\mathcal {H}^{\odot n}$ , its norm is at most

$$ \begin{align*} \Big\| \frac{1}{n!}\sum_{\pi\in \Sigma_n}(A_{\pi(1)} \otimes \dots \otimes A_{\pi(n)}) \Big\| &\leqslant \frac{1}{n!}\sum_{\pi\in \Sigma_n}\|(A_{\pi(1)} \otimes A_{\pi(2)} \otimes \dots \otimes A_{\pi(n)}) \| \\ &= \frac{1}{n!}\sum_{\pi\in \Sigma_n}\|A_{\pi(1)} \| \| A_{\pi(2)} \| \cdots \| A_{\pi(n)} \| \\ &= \frac{1}{n!}\sum_{\pi\in \Sigma_n}\|A_{1} \| \| A_{2} \| \dots \| A_{n} \| \\ &= \|A_1\| \|A_2 \| \ldots \| A_n \|. \end{align*} $$

(b) First, we have $\| A^{\odot n} \| \leqslant \| A \|^n$ from (a). Then

$$ \begin{align*} \|A \|^n = \sup_{ \substack{ \mathbf{v} \in \mathcal{H} \\ \| \mathbf{v}\| = 1}} \|A\mathbf{v} \|^n = \sup_{ \substack{ \mathbf{v} \in \mathcal{H} \\ \| \mathbf{v}\| = 1}} \| A^{\otimes n} (\mathbf{v} \otimes \mathbf{v} \otimes \cdots \otimes \mathbf{v}) \| \leqslant \| A^{\odot n} \|. \end{align*} $$
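For $\mathcal{H} = \mathbb{C}^d$ and $n = 2$, the identity $\|A^{\odot 2}\| = \|A\|^2$ can be tested by computing the spectral norm of $(A \otimes A)\mathrm{S}_2$: since $\mathcal{H} \odot \mathcal{H}$ is invariant under $A \otimes A$, this equals the norm of the restriction. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3
W = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        W[j * d + i, i * d + j] = 1.0    # swap: W(u⊗v) = v⊗u
S2 = (np.eye(d * d) + W) / 2             # projection onto H ⊙ H

A = rng.normal(size=(d, d))
# ||A ⊙ A|| equals the spectral norm of (A⊗A)S_2, since H ⊙ H is invariant.
sym_norm = np.linalg.norm(np.kron(A, A) @ S2, 2)
assert np.isclose(sym_norm, np.linalg.norm(A, 2) ** 2)
```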

Example 3.5 If $A,B \in \mathcal {B}(\mathcal {H})$ , then Propositions 2.11 and 3.2 ensure that

(3.6) $$ \begin{align} \frac{1}{2}( A \otimes B + B \otimes A) = \begin{bmatrix} A \odot B & 0 \\ 0 & A \wedge B \\ \end{bmatrix} : \begin{bmatrix} \mathcal{H} \odot \mathcal{H} \\ \mathcal{H} \wedge \mathcal{H} \end{bmatrix} \to \begin{bmatrix} \mathcal{H} \odot \mathcal{H} \\ \mathcal{H} \wedge \mathcal{H} \end{bmatrix}. \end{align} $$

Example 3.7 Let $\mathcal {H} = H^2( \mathbb {D})$ and let $T_g: H^2(\mathbb {D}) \to H^2( \mathbb {D})$ be the Toeplitz operator with symbol $g \in L^{\infty }(\mathbb {T})$ . Then (3.1) ensures that $T_{g} \otimes T_{g} : H^2(\mathbb {D}^2) \to H^2(\mathbb {D}^2)$ , the linear extension of the map $z^i w^j \mapsto T_{g}(z^i) T_{g}(w^j)$ , has norm $\|g\|_{\infty }^2$ . Proposition 3.4 says that $T_{g} \odot T_{g}$ , the restriction of $T_{g} \otimes T_{g}$ to $H^2_{\operatorname {sym}}(\mathbb {D}^2)$ , also has norm $\|g\|_{\infty }^2$ .

4 Basic properties

In this section, we collect some results on the operator-theoretic properties of symmetric tensor products of bounded Hilbert-space operators.

Lemma 4.1 $(A \odot B)(C\odot D)=\frac {1}{2}(AC\odot BD+AD\odot BC)$ for $A,B,C,D \in \mathcal {B}(\mathcal {H})$ .

Proof Restrict $\frac {1}{2} (A\otimes B+B\otimes A ) \frac {1}{2} (C\otimes D+D\otimes C) = \frac {1}{4}(AC \otimes BD+ BD \otimes AC+ AD \otimes BC+ BC \otimes AD)$ to $\mathcal {H} \odot \mathcal {H}$ and obtain the desired formula.
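The full-space identity used in this proof holds before restricting, so it can be checked directly with matrices; restricting both sides to $\mathcal{H} \odot \mathcal{H}$ then yields Lemma 4.1. A NumPy sketch with random real matrices (the helper `s` for $\mathrm{S}_2(A,B)$ is ours):

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3

def s(A, B):
    # S_2(A, B) = (A⊗B + B⊗A)/2 acting on H ⊗ H
    return (np.kron(A, B) + np.kron(B, A)) / 2

A, B, C, D = (rng.normal(size=(d, d)) for _ in range(4))

# S_2(A,B) S_2(C,D) = (S_2(AC,BD) + S_2(AD,BC))/2 on all of H ⊗ H;
# restricting to H ⊙ H gives Lemma 4.1.
assert np.allclose(s(A, B) @ s(C, D),
                   (s(A @ C, B @ D) + s(A @ D, B @ C)) / 2)
```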

Example 4.2 Equip $\mathbb {C}^2$ with the standard basis $\mathbf {e}_1,\mathbf {e}_2$ and consider

(4.3) $$ \begin{align} A = \big[\begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix}\big] \quad \text{and} \quad B = \big[\begin{smallmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{smallmatrix}\big]. \end{align} $$

With respect to the orthonormal basis $\mathbf {e}_1 \otimes \mathbf {e}_1, \mathbf {e}_1 \otimes \mathbf {e}_2, \mathbf {e}_2 \otimes \mathbf {e}_1, \mathbf {e}_2 \otimes \mathbf {e}_2$ of $\mathcal {H}\otimes \mathcal {H}$ , we see that $\frac {1}{2} (A \otimes B+B \otimes A)$ has the matrix representation

$$ \begin{align*} \frac{1}{2}\small \begin{bmatrix} 2 a_{11} b_{11} & a_{11} b_{12} + b_{11} a_{12} & a_{12} b_{11} + b_{12} a_{11} & 2a_{12} b_{12} \\ a_{11} b_{21} + b_{11} a_{21} & a_{11} b_{22} + b_{11} a_{22} & a_{12} b_{21} + b_{12} a_{21} & a_{12} b_{22} + b_{12} a_{22} \\ a_{21} b_{11} + b_{21} a_{11} & a_{21} b_{12} + b_{21} a_{12} & a_{22} b_{11} + b_{22} a_{11} & a_{22} b_{12} + b_{22} a_{12} \\ 2a_{21} b_{21} & a_{21} b_{22} + b_{21} a_{22} & a_{22} b_{21} + b_{22} a_{21} & 2a_{22} b_{22} \end{bmatrix}. \end{align*} $$

With respect to the orthonormal basis $\mathbf {e}_1 \odot \mathbf {e}_1, \sqrt {2}(\mathbf {e}_1 \odot \mathbf {e}_2), \mathbf {e}_2 \odot \mathbf {e}_2$ of $\mathcal {H} \odot \mathcal {H}$ , the symmetric tensor product $A \odot B$ has the matrix representation

(4.4) $$ \begin{align}\small \begin{bmatrix} a_{11}b_{11} &\frac{a_{11} b_{12} + b_{11}a_{12}}{\sqrt{2}} & a_{12}b_{12} \\ \frac{a_{11} b_{21} + b_{11}a_{21}}{\sqrt{2}}& \frac{a_{11} b_{22} + b_{11}a_{22} + a_{12} b_{21} + b_{12}a_{21}}{2} & \frac{a_{12} b_{22} + b_{12}a_{22}}{\sqrt{2}} \\ a_{21}b_{21}& \frac{a_{21} b_{22} + b_{21}a_{22}}{\sqrt{2}} & a_{22}b_{22} \end{bmatrix}. \end{align} $$
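The matrix (4.4) can be recovered numerically by compressing $\frac{1}{2}(A \otimes B + B \otimes A)$ to the orthonormal basis $\mathbf{e}_1 \odot \mathbf{e}_1, \sqrt{2}(\mathbf{e}_1 \odot \mathbf{e}_2), \mathbf{e}_2 \odot \mathbf{e}_2$. A NumPy sketch comparing the compression with the formula:

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
(a11, a12), (a21, a22) = A
(b11, b12), (b21, b22) = B

T = (np.kron(A, B) + np.kron(B, A)) / 2  # S_2(A,B) on C^2 ⊗ C^2

# Columns: e1⊙e1, √2(e1⊙e2), e2⊙e2 as vectors in C^4.
r2 = np.sqrt(2)
F = np.array([[1, 0, 0, 0],
              [0, 1 / r2, 1 / r2, 0],
              [0, 0, 0, 1]]).T

M = F.T @ T @ F                          # compression = matrix of A ⊙ B

expected = np.array([
    [a11 * b11, (a11 * b12 + b11 * a12) / r2, a12 * b12],
    [(a11 * b21 + b11 * a21) / r2,
     (a11 * b22 + b11 * a22 + a12 * b21 + b12 * a21) / 2,
     (a12 * b22 + b12 * a22) / r2],
    [a21 * b21, (a21 * b22 + b21 * a22) / r2, a22 * b22]])
assert np.allclose(M, expected)          # agrees with (4.4)
```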

Proposition 4.5 If $A_1, A_2, \ldots , A_n \in \mathcal {B} (\mathcal {H})$ have a common invariant subspace $\mathcal {V} \subseteq \mathcal {H}$ , then $\odot ^n \mathcal {V}$ is invariant for $A_1 \odot A_2 \odot \cdots \odot A_n$ .

Proof This follows from Proposition 3.2.

Proposition 4.6 Let $A_1,A_2,\ldots ,A_n \in \mathcal {B}(\mathcal {H})$ . Then $(A_1 \odot A_2 \odot \cdots \odot A_n)^* = A_1^* \odot A_2^*\odot \cdots \odot A_n^*$ and $(A_1 \wedge A_2 \wedge \cdots \wedge A_n)^* = A_1^* \wedge A_2^*\wedge \cdots \wedge A_n^*$ .

Proof Recall that $\mathcal {H}^{\odot n}$ and $\mathcal {H}^{\wedge n}$ are invariant under $\mathrm {S}_n(A_1,A_2,\ldots ,A_n)$ . Since $(A_1 \otimes A_2 \otimes \cdots \otimes A_n)^* = A_1^* \otimes A_2^* \otimes \cdots \otimes A_n^*$ , the result follows.

Remark 4.7 Observe that $(A \odot A^*)^* = A^* \odot A = A \odot A^*$ ; that is, $A \odot A^*$ is self-adjoint. For example, for the $2 \times 2$ matrix A in (4.3), the formula (4.4) gives

$$ \begin{align*} A \odot A^* = \begin{bmatrix} |a_{11}|^2 &\frac{a_{11} \overline{a_{21}} + \overline{a_{11}}a_{12}}{\sqrt{2}} & a_{12}\overline{a_{21}} \\ \frac{a_{11} \overline{a_{12}} + \overline{a_{11}}a_{21}}{\sqrt{2}}& \frac{a_{11} \overline{a_{22}} + \overline{a_{11}}a_{22} + |a_{12}|^2 + |a_{21}|^2}{2} & \frac{a_{12} \overline{a_{22}} + \overline{a_{21}}a_{22}}{\sqrt{2}} \\ a_{21}\overline{a_{12}}& \frac{a_{21} \overline{a_{22}} + \overline{a_{12}}a_{22}}{\sqrt{2}} & |a_{22}|^2 \end{bmatrix}. \end{align*} $$

Theorem 4.8 Let $A_1,A_2,\ldots ,A_n\in \mathcal {B}(\mathcal {H})$ .

  1. (a) If $A_1,A_2,\ldots ,A_n$ are self-adjoint, then $A_1 \odot A_2 \odot \cdots \odot A_n$ is self-adjoint.

  2. (b) If $A_1,A_2,\ldots ,A_n$ are normal and commute, then $A_1 \odot A_2 \odot \cdots \odot A_n$ is normal.

  3. (c) If $U \in \mathcal {B}(\mathcal {H})$ is unitary, then $U \odot U \odot \cdots \odot U$ is unitary.

Proof (a) This follows from Proposition 4.6.

(b) The Fuglede–Putnam theorem [Reference Putnam25] ensures that $A_i A_j^* = A_j^* A_i$ for $1\leqslant i,j\leqslant n$ . Proposition 4.6 and a computation establish the normality of $A_1 \odot A_2 \odot \cdots \odot A_n$ .

(c) By Proposition 4.6, $(U \odot U \odot \cdots \odot U)^* (U \odot U \odot \cdots \odot U) = I \odot I \odot \cdots \odot I$ and similarly $(U \odot U \odot \cdots \odot U) (U \odot U \odot \cdots \odot U)^* = I \odot I \odot \cdots \odot I$ .

Example 4.9 The normal matrices $A = \scriptsize \begin {bmatrix} 1 & i \\ i & 1 \end {bmatrix} $ and $B = \scriptsize \begin {bmatrix} 1 & -1 \\ 1 & 1 \end {bmatrix} $ do not commute, but $ A \odot B = \Bigg [ \begin {smallmatrix} 1 & -\frac {1-i}{\sqrt {2}} & -i \\ \frac {1+i}{\sqrt {2}} & 1 & -\frac {1-i}{\sqrt {2}} \\ i & \frac {1+i}{\sqrt {2}} & 1 \\ \end {smallmatrix}\Bigg ]$ is not normal. Thus, the commutativity hypothesis is necessary in (b).
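This failure of normality is easy to confirm numerically from the displayed matrix; a brief check of ours, using NumPy:

```python
import numpy as np

# A ⊙ B from Example 4.9, entered entrywise from the display above
r = np.sqrt(2)
M = np.array([
    [1, -(1 - 1j)/r, -1j],
    [(1 + 1j)/r, 1, -(1 - 1j)/r],
    [1j, (1 + 1j)/r, 1],
])
# The self-commutator MM* - M*M is nonzero, so M is not normal
defect = np.linalg.norm(M @ M.conj().T - M.conj().T @ M)
```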

Example 4.10 If $A,B \in \mathcal {B}(\mathcal {H})$ are self-adjoint and noncommuting, then $A \odot B$ is self-adjoint, and hence normal. Thus, the converse of (b) is false.

Example 4.11 The matrices $A = \scriptsize \begin {bmatrix} 1 & 0 \\ 0 & 1 \end {bmatrix} $ and $B = \scriptsize \begin {bmatrix} 0 & -1 \\ 1 & 0 \end {bmatrix} $ are unitary, but $A\odot B= \Bigg [\begin {smallmatrix} 0 & -1/\sqrt {2} &0\\ 1/\sqrt {2} & 0 & -1/\sqrt {2}\\ 0 & 1/\sqrt {2} &0 \end {smallmatrix}\Bigg ]$ is not.

Proposition 4.12 For orthogonal projections $P, Q$ , where $P,Q \neq 0,I$ and $PQ = QP = 0$ , the operator $2P\odot Q$ is an orthogonal projection. Furthermore, $2P\odot Q$ is neither $0$ nor $I \odot I$ .

Proof Since P and Q are self-adjoint, $2 \mathrm {S}_2 (P,Q)$ is self-adjoint. Since $P Q = QP = 0$ , we have $(2 \mathrm {S}_2 (P,Q))^2 = 2 \mathrm {S}_2 (P,Q)$ . Thus, $2 \mathrm {S}_2 (P,Q)$ is an orthogonal projection, so $2 \mathrm {S}_2 (P,Q)|_{\mathcal {H} \odot \mathcal {H}} = 2P\odot Q$ is an orthogonal projection. To show that $2P\odot Q \neq I \odot I , 0$ , observe that if $\mathbf {x},\mathbf {y} \in \mathcal {H}$ are nonzero, $P\mathbf {x} = \mathbf {x}$ , and $Q\mathbf {y} =\mathbf {y}$ , then $(2P\odot Q )(\mathbf {x} \odot \mathbf {y}) = \mathbf {x} \odot \mathbf {y}$ and $(2P\odot Q) (\mathbf {x} \odot \mathbf {x}) = 0$ .
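Following the proof, one can check the projection identity on the full tensor square for a concrete pair of mutually annihilating projections (our choice of $P$ and $Q$):

```python
import numpy as np

P = np.diag([1.0, 0.0])          # projection onto span{e1}
Q = np.diag([0.0, 1.0])          # projection onto span{e2}; PQ = QP = 0

# 2·S_2(P,Q) = P⊗Q + Q⊗P, as in the proof above
K = np.kron(P, Q) + np.kron(Q, P)
is_projection = np.allclose(K @ K, K) and np.allclose(K, K.conj().T)  # True
nontrivial = not np.allclose(K, 0) and not np.allclose(K, np.eye(4))  # True
```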

One can define tensor powers of bounded conjugate-linear operators. An analogue of Proposition 3.2 shows that $\mathcal {H}^{\odot n}$ is invariant under $\mathrm {S}_n(C_1,C_2,\ldots , C_n)$ for any bounded conjugate-linear operators $C_1, C_2, \ldots , C_n $ . We say that C is a conjugation on $\mathcal {H}$ if C is conjugate linear, isometric, and involutive. We say that $T \in \mathcal {B}(\mathcal {H})$ is C-symmetric if $T = CT^*C$ [Reference Garcia, Prodan and Putinar13–Reference Garcia and Putinar15]. If C is a conjugation, let $C^{\odot n}$ denote the restriction of $C^{\otimes n}$ to $\mathcal {H}^{\odot n}$ .

Proposition 4.13 Let C be a conjugation on $\mathcal {H}$ , and let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ be C-symmetric. (a) $A_1 \otimes A_2 \otimes \cdots \otimes A_n $ is $C^{\otimes n}$ -symmetric. (b) $A_1 \odot A_2 \odot \cdots \odot A_n $ is $C^{\odot n}$ -symmetric.

Proof (a) Since $(A_1 \otimes A_2 \otimes \cdots \otimes A_n)^* = A_1^* \otimes A_2^* \otimes \cdots \otimes A_n^*$ , the result follows.

(b) Since $C^{\otimes n} (\mathcal {H}^{\odot n}) \subseteq \mathcal {H}^{\odot n}$ , it follows that $C^{\odot n}$ is a well-defined conjugation on $\mathcal {H}^{\odot n}$ . The desired result follows from part (a) and Proposition 4.6.

5 Norms and spectral radius

In this section, we provide various bounds for the norm of symmetric tensor products of operators, as well as a spectral-radius formula for symmetric tensor powers. Parts (a) and (b) of Theorem 5.1 may already be known, although we have not encountered them in the literature.

Theorem 5.1 Let $A,B \in \mathcal {B}(\mathcal {H})$ .

  1. (a) $\frac {1}{\sqrt {2}}\sup _{ \mathbf {x} \in \mathcal {H}, \|\mathbf {x}\| = 1} { \|A\mathbf {x}\| \|B\mathbf {x}\|} \leqslant \| A \odot B \| $ , and this is sharp.

  2. (b) If $A,B \neq 0$ , then $A \odot B \neq 0$ .

  3. (c) $\rho ( A^{\odot n} ) = \rho (A)^n$ , in which $\rho (A) := \sup \{ |\lambda | : \lambda \in \sigma (A) \}$ denotes the spectral radius.

Proof (a) If $ \|\mathbf {x}\| = 1$ , then $\mathbf {x} \otimes \mathbf {x} \in \mathcal {H} \odot \mathcal {H}$ has norm one, so Lemma 2.15 ensures that

$$ \begin{align*} \frac{ \|A\mathbf{x}\| \|B\mathbf{x}\|}{\sqrt{2}} \leqslant \| A\mathbf{x} \odot B\mathbf{x} \| = \left\|\frac{(A \otimes B + B \otimes A) (\mathbf{x} \otimes \mathbf{x})}{2} \right\| \leqslant \| A \odot B \|. \end{align*} $$

Equality is attained for $A = \big [\begin {smallmatrix} 1 & 0 \\ 0 & 0 \\ \end {smallmatrix}\big ]$ , $B = \big [\begin {smallmatrix} 0 & 0 \\ 1 & 0 \end {smallmatrix}\big ]$ , and $\mathbf {x} = \big [ \begin {smallmatrix} 1 \\ 0 \end {smallmatrix} \big ]$ . Indeed, (4.4) ensures that

$$ \begin{align*} A \odot B = \bigg[ \begin{smallmatrix} 0 & 0 & 0 \\ \frac{1}{\sqrt{2}} & 0 & 0 \\ 0 & 0 & 0 \end{smallmatrix} \bigg], \quad \text{and hence} \quad \| A \odot B \| = \frac{1}{\sqrt{2}}, \end{align*} $$

while $\mathbf {x}$ is of unit norm and $\|A\mathbf {x}\| = \| B\mathbf {x} \| = 1$ , so $\frac { \|A\mathbf {x}\| \|B\mathbf {x}\|}{\sqrt {2}} = \frac {1}{\sqrt {2}} = \| A \odot B \| $ .
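The sharpness computation can be replayed numerically; here $A \odot B$ is realized (an ad hoc construction of ours) by compressing $\frac{1}{2}(A \otimes B + B \otimes A)$ to the symmetric subspace of $\mathbb{C}^2 \otimes \mathbb{C}^2$:

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

S = (np.kron(A, B) + np.kron(B, A)) / 2
r = 1 / np.sqrt(2)
V = np.array([[1, 0, 0], [0, r, 0], [0, r, 0], [0, 0, 1]])  # symmetric basis
M = V.T @ S @ V                      # matrix of A ⊙ B, as displayed above

x = np.array([1.0, 0.0])
lower = np.linalg.norm(A @ x) * np.linalg.norm(B @ x) / np.sqrt(2)
norm_AB = np.linalg.norm(M, 2)       # operator norm; both sides equal 1/√2
```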

(b) Let $A,B \neq 0$ . If there is a unit vector $\mathbf {x}$ such that $A \mathbf {x} \neq \mathbf {0}$ and $B \mathbf {x} \neq \mathbf {0}$ , then (a) ensures that $0< \frac {1}{\sqrt {2}} \| A\mathbf {x} \| \| B \mathbf {x} \| \leqslant \| A \odot B\|$ . So suppose that $A\mathbf {x} = \mathbf {0}$ or $B\mathbf {x} = \mathbf {0}$ for all $\mathbf {x}\in \mathcal {H}$ . Pick $\mathbf {u}$ such that $ A \mathbf {u} \neq \mathbf {0}$ and $\mathbf {v} $ such that $ B \mathbf {v} \neq \mathbf {0}$ . Then $B \mathbf {u} =\mathbf {0}$ and $A \mathbf {v} =\mathbf {0}$ ; moreover, $\mathbf {u}\neq -\mathbf {v}$ . Let $\mathbf {x} = \frac {\mathbf {u} + \mathbf {v}}{ \| \mathbf {u} + \mathbf {v} \| }$ ; then (a) leads to the contradiction

$$ \begin{align*} 0 < \frac{1}{\sqrt{2}} \frac{ \|A \mathbf{u} \| \| B \mathbf{v} \| }{\| \mathbf{u} + \mathbf{v} \|^2 } = \frac{1}{\sqrt{2}} \| A\mathbf{x} \| \| B\mathbf{x} \| \leqslant \| A \odot B \|. \end{align*} $$

In both cases, $A \odot B\neq 0 $ since it has positive norm.

(c) Since $ (A^{\odot n})^k = (A^k)^{\odot n}$ for each $k \in \mathbb {N}$ , Proposition 3.4 ensures that $ \| (A^{\odot n})^k \| = \|(A^k)^{\odot n} \| = \|A^k\|^n$ . Gelfand’s formula [Reference Conway7, Proposition 3.8, Chapter 5] yields

$$ \begin{align*}\rho ( A^{\odot n}) = \inf_{k \in \mathbb{N}} \| (A^{\odot n})^k \|^{\frac{1}{k}} = \inf_{k \in \mathbb{N}} \|A^k\|^{\frac{n}{k}} = \rho(A)^n. \end{align*} $$
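For $n = 2$ the spectral-radius identity can be spot-checked numerically (a sketch of ours; $A^{\odot 2}$ is the restriction of $A \otimes A$ to the symmetric subspace):

```python
import numpy as np

def sym_square(A):
    """A ⊙ A for a 2×2 matrix A, via compression of A⊗A to the symmetric subspace."""
    r = 1 / np.sqrt(2)
    V = np.array([[1, 0, 0], [0, r, 0], [0, r, 0], [0, 0, 1]])
    return V.conj().T @ np.kron(A, A) @ V

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[2.0, 1.0], [0.0, 3.0]])     # eigenvalues 2 and 3, so ρ(A) = 3
lhs = spectral_radius(sym_square(A))        # ρ(A ⊙ A) = 9
rhs = spectral_radius(A) ** 2               # ρ(A)² = 9
```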

In contrast to symmetric tensor products, the antisymmetric products of nonzero operators may be $0$ . If P is a rank-one orthogonal projection, then $P^{\wedge n} = 0$ for $n\geqslant 2$ .

Theorem 5.2

  1. (a) If $A_1, A_2, \ldots , A_n \in \mathcal {B} ( \mathcal {H}) $ and the $A_i$ have orthogonal ranges, then $\| A_1 \odot A_2 \odot \cdots \odot A_n \| \leqslant \frac {1}{\sqrt {n!}}\| A_1 \|\| A_2 \| \cdots \| A_n \|$ . For $n=2$ , the inequality is sharp.

  2. (b) If $(\ker B)^\perp \subseteq \ker A$ and $\operatorname {ran} B \subseteq (\operatorname {ran} A)^\perp $ , then $\frac {1}{2}\|A\|\|B\| \leqslant \|A \odot B\| \leqslant \frac {1}{\sqrt {2}}\| A \|\| B \|$ . The inequalities are sharp.

Proof (a) Recall that finite sums $\sum _{i=1}^k \mathbf {v}^1_i\otimes \mathbf {v}^2_i \otimes \cdots \otimes \mathbf {v}^n_i$ of simple tensors are dense in $\mathcal {H}^{\otimes n}$ . Take the supremum over such vectors and observe that

$$ \begin{align*} &\Big\|\sum_{\pi\in \Sigma_n} A_{\pi(1)} \otimes A_{\pi(2)} \otimes \dots \otimes A_{\pi(n)}\Big\|\\ &\qquad= \sup \frac{\| (\sum_{\pi\in \Sigma_n} A_{\pi(1)} \otimes A_{\pi(2)} \otimes \dots \otimes A_{\pi(n)})(\sum_{i=1}^k \mathbf{v}^1_i\otimes \mathbf{v}^2_i \otimes \cdots \otimes \mathbf{v}^n_i) \|}{\| \sum_{i=1}^k \mathbf{v}^1_i\otimes \mathbf{v}^2_i \otimes \cdots \otimes \mathbf{v}^n_i \|} \\ &\qquad=\sup \frac{\| \sum_{\pi\in \Sigma_n} \sum_{i=1}^k A_{\pi(1)}\mathbf{v}_i^1 \otimes A_{\pi(2)}\mathbf{v}_i^2 \otimes \dots \otimes A_{\pi(n)}\mathbf{v}_i^n \|}{\| \sum_{i=1}^k \mathbf{v}^1_i\otimes \mathbf{v}^2_i \otimes \cdots \otimes \mathbf{v}^n_i \|} \\ &\qquad=\sup \frac{(\sum_{\pi\in \Sigma_n} \| \sum_{i=1}^k A_{\pi(1)}\mathbf{v}_i^1 \otimes A_{\pi(2)}\mathbf{v}_i^2 \otimes \dots \otimes A_{\pi(n)}\mathbf{v}_i^n \|^2)^{1/2}}{\| \sum_{i=1}^k \mathbf{v}^1_i\otimes \mathbf{v}^2_i \otimes \cdots \otimes \mathbf{v}^n_i \|} \\ &\qquad= \sup_{\mathbf{x} \in \mathcal{H}^{\otimes n} } \frac{\left( \sum_{\pi \in \Sigma_n }\| (A_{\pi(1)} \otimes A_{\pi(2)} \otimes \cdots \otimes A_{\pi(n)} )\mathbf{x} \|^2\right)^{1/2}}{\| \mathbf{x} \|} \\ &\qquad \leqslant \sqrt{n!}\| A_1 \|\| A_2 \|\cdots \| A_n \|. \end{align*} $$

The third equality above holds because the $A_i$ have orthogonal ranges, which makes the summands indexed by $\Sigma _n$ pairwise orthogonal; the final inequality is due to (3.1). Since $A_1 \odot A_2 \odot \cdots \odot A_n$ is the restriction of $\frac {1}{n!}\sum _{\pi \in \Sigma _n} A_{\pi (1)} \otimes A_{\pi (2)} \otimes \cdots \otimes A_{\pi (n)}$ to $\mathcal {H}^{\odot n}$ , it follows that $\| A_1 \odot A_2 \odot \cdots \odot A_n \| \leqslant \frac {\sqrt {n!}}{n!} \| A_1 \|\| A_2 \| \cdots \| A_n \| = \frac {1}{\sqrt {n!}}\| A_1 \|\| A_2 \| \cdots \| A_n \|$ . For $n=2$ , the matrices from the proof of Theorem 5.1(a) have orthogonal ranges and demonstrate that the inequality is sharp.

(b) Part (a) ensures that the desired upper inequality holds and is sharp. It suffices to examine the lower inequality. For $\mathbf {u}\in (\ker A)^\perp $ and $\mathbf {v}\in \ker A$ ,

$$ \begin{align*} \| (A\otimes B+B\otimes A)(\mathbf{u}\otimes \mathbf{v}+\mathbf{v}\otimes \mathbf{u}) \| &= \| A\mathbf{u}\otimes B\mathbf{v} + B\mathbf{v}\otimes A\mathbf{u} \|\\ &= (\| A\mathbf{u}\otimes B\mathbf{v} \|^2 + \| B\mathbf{v}\otimes A\mathbf{u} \|^2)^{1/2} \end{align*} $$

since $(A\mathbf {u} \otimes B\mathbf {v}) \perp (B\mathbf {v}\otimes A\mathbf {u} )$ . Then

$$ \begin{align*} \| A\odot B \| &\geqslant \sup_{\substack{\mathbf{u}\in (\ker A)^{\perp} \\ \mathbf{v} \in \ker A }}\frac{\| (A\odot B)(\mathbf{u}\otimes \mathbf{v}+\mathbf{v}\otimes \mathbf{u}) \|}{\| \mathbf{u}\otimes \mathbf{v}+\mathbf{v}\otimes \mathbf{u} \|}\\ &= \frac{1}{2}\sup_{\substack{\mathbf{u}\in (\ker A)^{\perp} \\ \mathbf{v} \in \ker A}}\frac{(\| A\mathbf{u}\otimes B\mathbf{v} \|^2+\| B\mathbf{v}\otimes A\mathbf{u} \|^2)^{1/2}}{\sqrt{2}\| \mathbf{v} \|\| \mathbf{u} \|}\\ &= \frac{1}{2} \sup_{\substack{\mathbf{u}\in (\ker A)^{\perp} \\ \mathbf{v} \in \ker A }} \frac{ \|A\mathbf{u} \| \| B\mathbf{v} \|}{\| \mathbf{u} \|\| \mathbf{v} \|} =\frac{1}{2}\| A \|\| B \| \end{align*} $$

since $\| A \|=\sup _{\mathbf {u}\in (\ker A)^\perp }\frac {\| A\mathbf {u} \|}{\| \mathbf {u} \|}$ and $(\ker B)^\perp \subseteq \ker A$ .

To see that the lower inequality is sharp, let $A = \big [ \begin {smallmatrix} 1 & 0 \\ 0 & 0 \end {smallmatrix}\big ]$ and $ B = I - A$ , so $(\ker B)^\perp \subseteq \ker A$ and $\operatorname {ran} B \subseteq (\operatorname {ran} A)^\perp $ . Then Proposition 4.12 yields $\| A \odot B \| = \frac {1}{2}$ .
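The extremal pair above can be verified directly (our numerical sketch, with $A \odot B$ realized by compression of the symmetrization to the symmetric subspace):

```python
import numpy as np

A = np.diag([1.0, 0.0])
B = np.eye(2) - A                    # complementary projection

S = (np.kron(A, B) + np.kron(B, A)) / 2
r = 1 / np.sqrt(2)
V = np.array([[1, 0, 0], [0, r, 0], [0, r, 0], [0, 0, 1]])
M = V.T @ S @ V                      # equals diag(0, 1/2, 0)
norm_AB = np.linalg.norm(M, 2)       # = 1/2 = ‖A‖‖B‖/2
```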

6 Spectrum

Here, we present results on the spectrum of symmetric products of Hilbert-space operators (the finite-dimensional case is simpler; see [Reference Bhatia1, p. 18]). We find a complete description in some special cases. In what follows, $\sigma (A)$ , $\sigma _{\mathrm {p}}(A)$ , and $\sigma _{\mathrm {ap}}(A)$ denote the spectrum, point spectrum, and approximate point spectrum of A, respectively [Reference Garcia, Mashreghi and Ross12, Definition 2.4.5]. For $X, Y \subseteq \mathbb {C}$ , let $X + Y := \{ x+y : x \in X, \, y \in Y \}$ and $XY := \{ xy : x \in X, \, y \in Y \}$ .

Theorem 6.1 (Brown–Pearcy [Reference Brown and Pearcy2])

$\sigma (A\otimes B)=\sigma (A)\sigma (B)$ for all $A,B \in \mathcal {B}(\mathcal {H})$ .

Proposition 6.2 Let $A,B \in \mathcal {B}(\mathcal {H})$ .

  1. (a) $\sigma (\tfrac {1}{2}(A \otimes B + B \otimes A) ) = \sigma (A \odot B) \cup \sigma (A \wedge B)$ .

  2. (b) $\sigma _{\mathrm {p}} (\tfrac {1}{2}(A \otimes B + B \otimes A) ) = \sigma _{\mathrm {p}} (A \odot B) \cup \sigma _{\mathrm {p}} (A \wedge B)$ .

Proof This follows from the direct-sum decomposition (3.6).

Theorem 6.3 Let $A \in \mathcal {B}(\mathcal {H})$ .

  1. (a) $\sigma (A\odot I) = \frac {1}{2}(\sigma (A)+\sigma (A))$ .

  2. (b) $\sigma (A\odot A) = \sigma (A)\sigma (A)$ .

Proof (a) First, observe that

$$ \begin{align*} \sigma(A\odot I) &\subseteq \sigma\big(\tfrac{1}{2} (A\otimes I+I \otimes A) \big) && (\text{Proposition 6.2})\\ &= \tfrac{1}{2}(\sigma(A)+\sigma(A)) && (\text{by [29, Theorem 2.1]}). \end{align*} $$

Let $\lambda ,\mu \in \sigma _{\mathrm {ap}}(A)$ . There are sequences $\{\mathbf {u}_i\}_{i=1}^{\infty }$ and $\{\mathbf {v}_i\}_{i=1}^{\infty }$ of unit vectors such that $\| (A-\lambda I)\mathbf {u}_i \|\to 0$ and $\| (A-\mu I)\mathbf {v}_i \|\to 0$ . Then Lemma 2.15 ensures that

$$ \begin{align*} &\bigg\|\bigg(A\odot I- \Big(\frac{\lambda}{2}+\frac{\mu}{2}\Big)(I\odot I)\bigg)\bigg(\frac{\mathbf{u}_i\odot \mathbf{v}_i}{\|\mathbf{u}_i\odot \mathbf{v}_i \|}\bigg)\bigg\|\\&\quad \leqslant \sqrt{2}\bigg\|\bigg(A\odot I- \Big(\frac{\lambda}{2}+\frac{\mu}{2}\Big)(I\odot I)\bigg) (\mathbf{u}_i\odot \mathbf{v}_i) \bigg\| \\&\quad = \frac{1}{2\sqrt{2}}\big\| \big(A\otimes I+I \otimes A-(\lambda+\mu)(I\otimes I)\big)(\mathbf{u}_i\otimes \mathbf{v}_i+\mathbf{v}_i\otimes \mathbf{u}_i) \big\|\\&\quad = \frac{1}{2\sqrt{2}} \big\|A\mathbf{u}_i \otimes \mathbf{v}_i + \mathbf{u}_i \otimes A \mathbf{v}_i - \lambda \mathbf{u}_i \otimes \mathbf{v}_i - \mathbf{u}_i \otimes \mu\mathbf{v}_i \\&\qquad + A \mathbf{v}_i \otimes \mathbf{u}_i + \mathbf{v}_i \otimes A \mathbf{u}_i - \mathbf{v}_i \otimes \lambda\mathbf{u}_i - \mu \mathbf{v}_i \otimes \mathbf{u}_i \big\| \\&\quad= \frac{1}{2\sqrt{2}} \big\| (A-\lambda I)\mathbf{u}_i \otimes \mathbf{v}_i + \mathbf{v}_i \otimes (A-\lambda I)\mathbf{u}_i \\&\qquad + \mathbf{u}_i\otimes (A-\mu I)\mathbf{v}_i+ (A-\mu I)\mathbf{v}_i \otimes \mathbf{u}_i \big\| \\&\quad \leqslant\frac{1}{2\sqrt{2}} \big( 2\| (A-\lambda I)\mathbf{u}_i \|\| \mathbf{v}_i \| + 2 \| (A-\mu I)\mathbf{v}_i \| \| \mathbf{u}_i \| \big)\\&\quad = \frac{1}{\sqrt{2}} \| (A-\lambda I)\mathbf{u}_i \| + \frac{1}{\sqrt{2}}\| (A-\mu I)\mathbf{v}_i \| \to 0. \end{align*} $$

Thus, $\frac {1}{2}(\lambda +\mu )\in \sigma _{\mathrm {ap}}(A\odot I)$ and hence

(6.4) $$ \begin{align} \sigma_{\mathrm{ap}}(A\odot I) \supseteq \tfrac{1}{2}(\sigma_{\mathrm{ap}}(A)+\sigma_{\mathrm{ap}}(A)). \end{align} $$

Recall that $\Omega (A)=\sigma (A)\setminus \sigma _{\mathrm {ap}}(A)$ is a bounded open set. Furthermore, [Reference Brown and Pearcy2, p. 164] shows that $\lambda \in \Omega (A)$ implies $\overline {\lambda }\in \sigma _{\mathrm {p}}(A^*)$ . Since $\sigma (A)$ is closed and $\Omega (A) \subseteq \sigma (A)$ , the boundary of $\Omega (A)$ is contained in $ \sigma (A) \setminus \Omega (A) = \sigma _{\mathrm {ap}}(A)$ .

Let $\lambda ,\mu \in \sigma (A)$ . Following [Reference Brown and Pearcy2, Proof 2], we examine four special cases.

  1. (i) If $\lambda ,\mu \in \sigma _{\mathrm {ap}}(A)$ , then (6.4) ensures that $\frac {1}{2} (\lambda +\mu ) \in \sigma _{\mathrm {ap}}(A\odot I)\subseteq \sigma (A\odot I)$ .

  2. (ii) If $\lambda ,\mu \in \Omega (A)$ , then $\overline {\lambda },\overline {\mu } \in \sigma _{\mathrm {p}}(A^*)\subseteq \sigma _{\mathrm {ap}}(A^*)$ . Then (i) ensures that

    $$ \begin{align*} \tfrac{1}{2}(\overline{\lambda+\mu})\in \sigma_{\mathrm{ap}}(A^* \odot I) =\sigma_{\mathrm{ap}}((A\odot I)^*)\subseteq \sigma ((A\odot I)^*), \end{align*} $$
    so $\frac {1}{2} (\lambda +\mu ) \in \sigma (A\odot I)$ .
  3. (iii) Suppose that $\lambda \in \sigma _{\mathrm {ap}}(A)$ and $\mu \in \Omega (A)$ . Then $\overline {\lambda } \in \sigma (A^*)$ and $\overline {\mu }\in \sigma _{\mathrm {ap}}(A^*)$ . If $\overline {\lambda }\in \sigma _{\mathrm {ap}}(A^*)$ , then (6.4), applied to $A^*$ , ensures that

    $$ \begin{align*} \tfrac{1}{2}(\overline{\lambda+\mu})\in \sigma_{\mathrm{ap}}(A^*\odot I)\subseteq \sigma((A\odot I)^*), \quad \text{so } \tfrac{1}{2}(\lambda+\mu) \in \sigma(A\odot I). \end{align*} $$
    Suppose instead that $\overline {\lambda } \in \Omega (A^*)$ . The openness of $\Omega (A)$ and $\Omega (A^*)$ provide $\tau>0$ such that $\overline {\lambda }-t \in \Omega (A^*)$ and $\mu +t \in \Omega (A)$ for $0\leqslant t <\tau $ . Take $\tau $ maximal with this property; since $\Omega (A)$ and $\Omega (A^*)$ are bounded, $\tau $ is finite, so at $t = \tau $ at least one of $\overline {\lambda }-\tau $ and $\mu +\tau $ leaves the corresponding open set and lies in the approximate point spectrum.
    • If $\overline {\lambda }-\tau \in \Omega (A^*)$ and $\mu +\tau \in \sigma _{\mathrm {ap}}(A)$ , then $\lambda -\tau =\overline {\overline {\lambda }-\tau }\in \sigma _{\mathrm {ap}}(A)$ . Then (6.4) ensures that

      $$ \begin{align*} \tfrac{1}{2}(\lambda+\mu) = \tfrac{1}{2}(\lambda-\tau +\mu+\tau )\in \sigma_{\mathrm{ap}}(A\odot I)\subseteq \sigma (A\odot I). \end{align*} $$
    • If $\overline {\lambda }-\tau \in \sigma _{\mathrm {ap}}(A^*)$ and $\mu +\tau \in \Omega (A)$ , this case is analogous to the previous one.

    • Suppose that $\overline {\lambda }-\tau \in \sigma _{\mathrm {ap}}(A^*)$ and $\mu +\tau \in \sigma _{\mathrm {ap}}(A)$ . If $t_n \to \tau $ and $0 <t_n <\tau $ , then $\overline {\lambda }-t_n \in \Omega (A^*)$ and hence $\lambda -t_n \in \sigma _{\mathrm {ap}}(A)$ . Thus,

      $$ \begin{align*} \tfrac{1}{2}(\lambda-t_n+\mu+\tau )\in \sigma(A\odot I). \end{align*} $$
      Since $\sigma (A \odot I)$ is closed, $\frac {1}{2}(\lambda +\mu ) \in \sigma (A\odot I)$ .
  4. (iv) The case $\lambda \in \Omega (A)$ and $\mu \in \sigma _{\mathrm {ap}}(A)$ is analogous to (iii).

In all cases $\frac {1}{2}(\lambda +\mu )\in \sigma (A \odot I)$ , so $\sigma (A\odot I)=\frac {1}{2}(\sigma (A)+\sigma (A))$ .

(b) The proof is similar to that of (a), so we only sketch the details. Proposition 6.2 and Theorem 6.1 yield $\sigma ( A \odot A ) \subseteq \sigma \left ( A \otimes A \right ) = \sigma (A) \sigma (A)$ . If $\lambda ,\mu \in \sigma _{\mathrm {ap}}(A)$ , then an argument similar to that of (a) ensures that $\lambda \mu \in \sigma _{\mathrm {ap}}(A\odot A)$ . As above, we can follow [Reference Brown and Pearcy2, Proof 2] and use a case-by-case analysis to show that $\lambda \mu \in \sigma (A \odot A)$ , so that $\sigma (A)\sigma (A) \subseteq \sigma (A \odot A)$ .
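Both identities in Theorem 6.3 can be illustrated in finite dimensions; for a $2 \times 2$ matrix with $\sigma(A) = \{2,3\}$ one expects $\sigma(A \odot I) = \{2, \tfrac{5}{2}, 3\}$ and $\sigma(A \odot A) = \{4, 6, 9\}$. A numerical sketch of ours:

```python
import numpy as np

def compress(S):
    """Restrict a 4×4 operator on C^2 ⊗ C^2 to the symmetric subspace."""
    r = 1 / np.sqrt(2)
    V = np.array([[1, 0, 0], [0, r, 0], [0, r, 0], [0, 0, 1]])
    return V.conj().T @ S @ V

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # σ(A) = {2, 3}
I2 = np.eye(2)

AI = compress((np.kron(A, I2) + np.kron(I2, A)) / 2)   # A ⊙ I
AA = compress(np.kron(A, A))                            # A ⊙ A

spec_AI = np.sort(np.linalg.eigvals(AI).real)   # {2, 5/2, 3} = ½(σ(A)+σ(A))
spec_AA = np.sort(np.linalg.eigvals(AA).real)   # {4, 6, 9} = σ(A)σ(A)
```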

7 Diagonal operators

Since diagonal operators are among the most elementary operators one encounters in the infinite-dimensional setting [Reference Garcia, Mashreghi and Ross12, Chapter 2], it makes sense to consider their symmetric tensor products. Let $\mathbf {e}_1,\mathbf {e}_2,\ldots $ be an orthonormal basis for $\mathcal {H}$ and suppose that $L,M \in \mathcal {B}(\mathcal {H})$ satisfy $L\mathbf {e}_i = \lambda _i \mathbf {e}_i$ and $M\mathbf {e}_i = \mu _i \mathbf {e}_i$ for $i \geqslant 1$ . For $i,j\geqslant 1$ ,

(7.1) $$ \begin{align} &(L \odot M)(\mathbf{e}_i \odot \mathbf{e}_j) = \tfrac{1}{4}(L \otimes M + M \otimes L)( \mathbf{e}_i \otimes \mathbf{e}_j + \mathbf{e}_j \otimes \mathbf{ e}_i)\nonumber\\ &\qquad= \tfrac{1}{4}(L \mathbf{e}_i \otimes M \mathbf{e}_j + M\mathbf{e}_i \otimes L\mathbf{e}_j+ L\mathbf{ e}_j \otimes M\mathbf{e}_i + M\mathbf{e}_j \otimes L\mathbf{e}_i)\nonumber\\ &\qquad= \tfrac{1}{4}(\lambda_i\mathbf{e}_i \otimes \mu_j \mathbf{e}_j+\mu_i \mathbf{e}_i \otimes \lambda_j\mathbf{e}_j+ \lambda_j\mathbf{e}_j \otimes \mu_i\mathbf{e}_i + \mu_j\mathbf{e}_j \otimes \lambda_i\mathbf{e}_i)\nonumber\\ &\qquad= \tfrac{1}{2}(\lambda_i\mu_j+\lambda_j\mu_i) ( \mathbf{e}_i \odot \mathbf{e}_j ). \end{align} $$

Thus, $L \odot M$ is a diagonal operator with

$$ \begin{align*} \sigma_{\mathrm{p}}(L \odot M) = \left\{\tfrac{1}{2}(\lambda_i\mu_j+\lambda_j\mu_i ): i,j \geqslant 1 \right\} \quad \text{and} \quad \sigma(L \odot M) = \sigma_{\mathrm{p}}(L \odot M)^-. \end{align*} $$

For symmetric products of diagonal operators, we can improve upon Theorem 5.1(a).

Proposition 7.2 Let $L,M$ be diagonal operators as above. Then $ \|L\| \| M \| (\sqrt {2}-1) \leqslant \| L \odot M \| \leqslant \|L\| \|M\|$ and these inequalities are sharp.

Proof The computation (7.1) shows that it suffices to prove the result for $2 \times 2$ diagonal matrices. Let $L = \operatorname {diag}(\lambda _1,\lambda _2)$ and $M = \operatorname {diag}(\mu _1,\mu _2)$ . By rescaling, we may assume that $\| L \|=\max \{ |\lambda _1|,|\lambda _2| \}=1$ and $\| M \|=\max \{ |\mu _1|,|\mu _2|\}=1$ . Consider $L \odot M$ , which by (4.4) we identify with $\operatorname {diag}( \lambda _1 \mu _1 , \frac {\lambda _1 \mu _2 + \lambda _2 \mu _1}{2}, \lambda _2 \mu _2)$ . Then

(7.3) $$ \begin{align} \| L \odot M \|=\max\big\{ | \lambda_1 \mu_1|,\tfrac{1}{2}|\lambda_1 \mu_2 + \lambda_2 \mu_1|,|\lambda_2 \mu_2|\big\} \end{align} $$

and one of the following holds:

  1. (a) $|\lambda _1|=|\mu _1|=1$ or $|\lambda _2|=|\mu _2|=1$ .

  2. (b) $|\lambda _1|=|\mu _2|=1$ or $|\lambda _2|=|\mu _1|=1$ .

If (a) holds, then $\| L \odot M \| \geqslant \max \{ |\lambda _1\mu _1|,|\lambda _2\mu _2| \}=1$ . If (b) holds, then without loss of generality assume that $|\lambda _1|=|\mu _2|=1$ . From (7.3),

$$ \begin{align*} \| L \odot M \| &\geqslant \inf_{|\mu_1|, |\lambda_2| \leqslant 1} \max \big\{|\mu_1|, \tfrac{1}{2}|1+\lambda_2 \mu_1|,|\lambda_2|\big\} \\ &\geqslant \inf_{0 \leqslant s\leqslant 1} \max \{s, \tfrac{1}{2}(1- s^2)\} = \sqrt{2} - 1. \end{align*} $$

The lower bound is attained by $L= \big [\begin {smallmatrix} 1 & 0 \\ 0 & \sqrt {2}-1\end {smallmatrix}\big ]$ and $M= \big [ \begin {smallmatrix}-\sqrt {2}+1& 0 \\ 0&1\end {smallmatrix}\big ]$ . The upper bound is attained by $L=M=I$ .
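These extremal matrices are easy to check from the diagonal form $\operatorname{diag}( \lambda_1 \mu_1, \frac{\lambda_1 \mu_2 + \lambda_2 \mu_1}{2}, \lambda_2 \mu_2)$ of $L \odot M$; a brief numerical confirmation of ours:

```python
import numpy as np

s = np.sqrt(2) - 1
lam = np.array([1.0, s])             # L = diag(1, √2−1), so ‖L‖ = 1
mu = np.array([-s, 1.0])             # M = diag(1−√2, 1), so ‖M‖ = 1

# Diagonal entries of L ⊙ M, from (4.4)
entries = np.array([lam[0]*mu[0],
                    (lam[0]*mu[1] + lam[1]*mu[0]) / 2,
                    lam[1]*mu[1]])
norm_LM = np.max(np.abs(entries))    # = √2 − 1, attaining the lower bound
```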

Proposition 7.4

  1. (a) There exists a self-adjoint diagonal operator D such that $\sigma (D)$ has measure zero in $\mathbb {R}$ and $\sigma (D\odot D)$ has positive measure in $\mathbb {R}$ .

  2. (b) There exists a diagonal operator D such that $\sigma (D)$ has planar Lebesgue measure zero and $\sigma (D\odot D)$ has positive planar Lebesgue measure.

Proof (a) Let $\mathscr {C}$ denote the Cantor set, which has measure zero. The exponential function is differentiable, so by [Reference Rudin28, Lemma 7.25] $\{ e^\mu : \mu \in \mathscr {C}\cap \mathbb {Q} \}^- = \{e^c : c \in \mathscr {C}\}$ has measure zero in $\mathbb {R}$ . Let D be a diagonal operator with point spectrum $\{ e^{\lambda } : \lambda \in \mathscr {C} \cap \mathbb {Q}\}$ , so that $\sigma (D) = \{e^c : c \in \mathscr {C}\}$ has measure zero. Since $\mathscr {C}+\mathscr {C}=[0,2]$ , Theorem 6.3(b) ensures that $\sigma (D \odot D) = \sigma (D)\sigma (D) = \{ e^{c+d} : c,d \in \mathscr {C}\} = [1,e^2]$ has positive measure in $\mathbb {R}$ .

(b) Let D be a diagonal operator with point spectrum $\{ e^{\lambda + i \mu } : \lambda \in \mathscr {C} \cap \mathbb {Q}, \mu \in \mathbb {Q} \cap [0,2\pi )\}$ . Then $\sigma (D)$ has planar measure zero, but an argument similar to that in (a) ensures that $\sigma (D\odot D)$ is the annulus centered at $0$ with radii $1$ and $e^2$ , which has positive measure.

Below, the brackets $\{\hspace {-3.5pt}\{$ and $\}\hspace {-3.5pt}\}$ indicate a multiset, that is, a set that permits repeated elements.

Proposition 7.5 Let $A_1, A_2, \ldots , A_n \in \mathcal {B} (\mathcal {H})$ be commuting diagonal operators with $\sigma _{\mathrm {p}}(A_i) = \{\hspace {-3.5pt}\{ \lambda _1^{(i)}, \lambda _2^{(i)},\ldots \}\hspace {-3.5pt}\}$ allowing for repetition. Then

$$ \begin{align*}\small \sigma_{\mathrm {p}}(A_1 \odot A_2 \odot \cdots \odot A_n) = \bigg\{\!\!\bigg\{\frac{1}{n!} \sum_{\pi \in \Sigma_n }\lambda_{i_1}^{(\pi(1))} \lambda_{i_2}^{(\pi(2))}\cdots \lambda_{i_n}^{(\pi(n))} : i_1 \leqslant i_2 \leqslant \cdots \leqslant i_n \bigg\}\!\!\bigg\} \end{align*} $$

and $\sigma (A_1 \odot A_2 \odot \cdots \odot A_n) = \sigma _{\mathrm {p}}(A_1 \odot A_2 \odot \cdots \odot A_n)^-$ .

8 The shift operator and its adjoint

In this section, we find the spectrum of the symmetric tensor product of the unilateral shift and its adjoint. Let $(Sf)(z) = zf(z)$ denote the unilateral shift on $H^2(\mathbb {D})$ [Reference Garcia, Mashreghi and Ross12, Chapter 5]. Its adjoint is the backward shift $(S^*f)(z) = (f(z)-f(0))/z$ .

Theorem 8.1 The self-adjoint operators $S \odot S^*$ and $S \wedge S^*$ satisfy

$$ \begin{align*} \sigma_{\mathrm{p}}(S \odot S^*) =\big\{\hspace{-3.5pt}\big\{ \cos\! \big( \tfrac{(2j-1) \pi} { k+2} \big) : k \geqslant 0 \text{ and } 1\leqslant j \leqslant \lfloor \tfrac{k+2}{2} \rfloor \big\}\hspace{-3.5pt}\big\} \end{align*} $$

and

$$ \begin{align*} \sigma_{\mathrm{p}}(S \wedge S^*) =\big\{\hspace{-3.5pt}\big\{ \cos\! \big( \tfrac{2j \pi} { k+2} \big) : k \geqslant 1 \text{ and } 1\leqslant j \leqslant \lfloor \tfrac{k+1}{2} \rfloor \big\}\hspace{-3.5pt}\big\} , \end{align*} $$

with the eigenvalues in these multisets repeated by multiplicity. Moreover, $\sigma (S \odot S^*) = \sigma _{\mathrm {ap}} (S \odot S^*) = \sigma (S \wedge S^*) = \sigma _{\mathrm {ap}} (S \wedge S^*) = [-1,1]$ and $\| S \odot S^* \| = \| S \wedge S^* \|=1$ .

Proof Identify $H^2(\mathbb {D}) \otimes H^2(\mathbb {D})$ with $H^2(\mathbb {D}^2)$ as in Example 2.5 and consider the operator $T = \frac {1}{2}(S\otimes S^*+S^* \otimes S)$ , which acts on monomials by

$$ \begin{align*} T(z^i w^j) = \begin{cases} \frac{1}{2}(z^{i+1} w^{j-1}+z^{i-1} w^{j+1}), & \text{if } i,j \geqslant 1,\\[2pt] \frac{1}{2}z^{i+1} w^{j-1}, & \text{if } i = 0 \text{ and } j \geqslant 1,\\[2pt] \frac{1}{2}z^{i-1} w^{j+1}, & \text{if } i \geqslant 1 \text{ and } j =0,\\[2pt] 0, & \text{if } i=j=0. \end{cases} \end{align*} $$

Define $\mathcal {V}_0 = \mathcal {V}_0^+ = \operatorname {span}\{1\}$ and $\mathcal {V}_0^- = \{ 0\}$ . For $k\geqslant 1$ , let

$$ \begin{align*} \mathcal{V}_k &= \operatorname{span}\{ z^i w^{k-i} : 0 \leqslant i \leqslant k\}, && \text{so } \dim \mathcal{V}_k = k+1, \\ \mathcal{V}_{k}^+ &= \operatorname{span}\{z^i w^{k-i} + z^{k-i} w^i: 0 \leqslant i \leqslant {\lfloor \tfrac{k}{2}\rfloor}\}, && \text{so } \dim \mathcal{V}_k^+ = \lfloor \tfrac{k}{2} \rfloor + 1,\\ \mathcal{V}_{k}^- &= \operatorname{span}\{ z^i w^{k-i} - z^{k-i}w^i : 0 \leqslant i \leqslant {\lfloor \tfrac{k-1}{2}\rfloor}\}, && \text{so } \dim \mathcal{V}_k^- = \lfloor \tfrac{k-1}{2}\rfloor+1, \end{align*} $$

and note that $\dim \mathcal {V}_k = \dim \mathcal {V}_k^+ + \dim \mathcal {V}_k^-$ for $k\geqslant 1$ by a parity argument (or Proposition 2.11). Recall from (2.13) that $H^2(\mathbb {D}^2) = H^2_{\operatorname {sym}}(\mathbb {D}^2) \oplus H^2_{\operatorname {asym}}(\mathbb {D}^2)$ is an orthogonal direct sum. We have $\mathcal {V}_k = \mathcal {V}_k^+ \oplus \mathcal {V}_k^-$ for $k \geqslant 1$ , in which each $\mathcal {V}_k, \mathcal {V}_{k}^+ , \mathcal {V}_{k}^- $ is T-invariant, and

(8.2) $$ \begin{align} H^2(\mathbb{D}^2) = \bigoplus_{k=0}^{\infty} \mathcal{V}_k, \qquad H^2_{\operatorname{sym}}(\mathbb{D}^2) = \bigoplus_{k=0}^{\infty} \mathcal{V}_{k}^+ , \qquad H^2_{\operatorname{asym}}(\mathbb{D}^2) = \bigoplus_{k=1}^{\infty} \mathcal{V}_{k}^-. \end{align} $$

With respect to the orthonormal basis $\{z^{k-i} w^i\}_{i=0}^k$ of $\mathcal {V}_k$ , we identify the restriction $T|_{\mathcal {V}_k}$ with the $(k+1) \times (k+1)$ matrix (by convention $A_0 = [0]$ )

$$ \begin{align*} A_k =\tiny \begin{bmatrix}0 & \frac{1}{2} & & & & \\\frac{1}{2} & 0 & \frac{1}{2} & & &\\[2pt] & \frac{1}{2} & 0 & \ddots & &\\ & & \ddots & 0 & \frac{1}{2} &\\ & & & \frac{1}{2} & 0 &\frac{1}{2}\\ & & & & \frac{1}{2} & 0\end{bmatrix}. \end{align*} $$

From [Reference Kulkarni, Schmidt and Tsui21, Proposition 2.1], we have

(8.3) $$ \begin{align} \sigma (A_k) = \big\{ \cos \! \big( \tfrac{j \pi}{ k+2} \big) : j = 1,2, \ldots,k+1 \big\}. \end{align} $$

For k odd, with respect to the orthonormal basis $\{\frac {1}{\sqrt {2}}(z^{k-i} w^i+z^i w^{k-i})\}_{i=0}^{ \frac {k-1}{2}}$ of $\mathcal {V}_k^+$ , we identify the restriction $T|_{\mathcal {V}_k^+}$ with the $\frac {k+1}{2}\times \frac {k+1}{2}$ matrix

$$ \begin{align*} B_k =\tiny \begin{bmatrix} 0 & \frac{1}{2} & & & & \\ \frac{1}{2} & 0 & \frac{1}{2} & & &\\[2pt] & \frac{1}{2} & 0 & \ddots & &\\ & & \ddots & 0 & \frac{1}{2} &\\ & & & \frac{1}{2} & 0 & \frac{1}{2}\\[2pt] & & & & \frac{1}{2} & \frac{1}{2} \end{bmatrix}. \end{align*} $$

For k even, with respect to the orthonormal basis $\{\frac {1}{\sqrt {2}}(z^{k-i} w^i+z^i w^{k-i})\}_{i=0}^{ \frac {k}{2}-1}\cup \{z^{k/2}w^{k/2}\}$ of $\mathcal {V}_k^+$ , we identify the restriction $T|_{\mathcal {V}_k^+}$ with the $\left (\frac {k}{2}+1\right )\times \left (\frac {k}{2}+1\right )$ matrix

$$ \begin{align*} B_k = \tiny\begin{bmatrix} 0 & \frac{1}{2} & & & & \\ \frac{1}{2} & 0 & \frac{1}{2} & & &\\[2pt] & \frac{1}{2} & 0 & \ddots & &\\ & & \ddots & 0 & \frac{1}{2} &\\ & & & \frac{1}{2} & 0 & \frac{1}{\sqrt{2}}\\ & & & & \frac{1}{\sqrt{2}} & 0 \end{bmatrix} , \end{align*} $$

with the convention $B_0 = [0]$ . We determine the spectra of the matrices $B_k$ below.

With respect to the orthonormal basis $\{\frac {1}{\sqrt {2}}(z^i w^{k-i} - z^{k-i} w^i)\}_{i=0}^{\lfloor \frac {k-1}{2}\rfloor }$ of $\mathcal {V}_k^-$ , we identify the restriction $T|_{ \mathcal {V}_{k}^- }$ with the $\lfloor \frac {k+1}{2}\rfloor \times \lfloor \frac {k+1}{2}\rfloor $ matrix (note that $C_1 = [- \frac {1}{2}]$ and $C_2 = [0]$ )

$$ \begin{align*} C_k = \tiny \begin{bmatrix} 0 & \frac{1}{2} & & & & \\ \frac{1}{2} & 0 & \frac{1}{2} & & &\\[2pt] & \frac{1}{2} & 0 & \ddots & &\\ & & \ddots & 0 & \frac{1}{2} &\\ & & & \frac{1}{2} & 0 &\frac{1}{2}\\[1pt] & & & &\frac{1}{2} & \frac{(-1)^k - 1}{4} \end{bmatrix}. \end{align*} $$

For $k\geqslant 2$ even, $\sigma (C_k) = \{ \cos (\frac {2j \pi }{ k+2} ) : j = 1,2, \ldots , \frac {k}{2} \}$ [Reference Kulkarni, Schmidt and Tsui21, Proposition 2.1]. Suppose $k\geqslant 1$ is odd. By [Reference Kulkarni, Schmidt and Tsui21, equation (11)], $\lambda \in \sigma ( 2 C_k)$ if and only if $\lambda = -2 x $ , where $x\in [-1,1]$ solves

$$ \begin{align*} (-1 + 2x) \frac{\sin (\frac{1}{2}(k+1) \cos ^{-1}(x))}{\sin (\cos ^{-1} (x))} - \frac{\sin (\frac{1}{2}(k-1) \cos ^{-1}(x))}{\sin (\cos ^{-1} (x))} = 0. \end{align*} $$

Since $\cos (\frac {(2\ell -1 )\pi }{k+2})$ for $\ell = 1,2, \ldots ,\frac {k+1}{2}$ are the distinct solutions to this equation, $\sigma ( C_k) =\{ - \cos (\frac {(2\ell - 1)\pi }{k+2}) : \ell = 1,2, \ldots ,\frac {k+1}{2} \}$ . Since $- \cos {(x)} = \cos { (\pi - x)}$ , we can reindex and rewrite this as $\sigma (C_k) = \{ \cos (\tfrac {2j \pi }{k+2}) : j = 1,2, \ldots ,\tfrac {k+1}{2} \}$ . Regardless of the parity of k,

$$ \begin{align*} \sigma (C_k) = \big\{ \cos \! \big( \tfrac{2j \pi} { k+2} \big) : j = 1,2, \ldots, \big\lfloor \tfrac{k+1}{2} \big\rfloor \big\}. \end{align*} $$

Since $\mathcal {V}_k=\mathcal {V}_{k}^+ \oplus \mathcal {V}_{k}^- $ , up to unitary equivalence $A_k = B_k \oplus C_k$ . Thus,

(8.4) $$ \begin{align} \sigma (A_k) = \sigma (B_k) \cup \sigma (C_k). \end{align} $$

From (8.3)–(8.4), we obtain

$$ \begin{align*} \sigma (B_k) = \big\{ \cos\! \big( \tfrac{(2j-1) \pi} { k+2} \big) : j = 1,2, \ldots, \big\lfloor \tfrac{k+2}{2} \big\rfloor \big\}. \end{align*} $$

Since $S \odot S^*$ and $S \wedge S^*$ are self-adjoint and have norm at most $1$ , their spectra are contained in $[-1,1]$ . Up to unitary equivalence, (2.13) and (8.2) imply that

$$ \begin{align*} S \odot S^* = \bigoplus_{k=0}^{\infty} B_k \quad \text{and} \quad S \wedge S^* = \bigoplus_{k=1}^{\infty} C_k. \end{align*} $$

This yields the claimed point spectra of $S \odot S^*$ and $S \wedge S^*$ . A density argument reveals that $[-1,1] = \sigma _{\mathrm {p}}(S \odot S^*)^- \subseteq \sigma _{\mathrm {ap}}(S \odot S^*) \subseteq \sigma (S \odot S^*) \subseteq [-1,1]$ , so equality holds throughout. A similar argument treats $S \wedge S^*$ .
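The eigenvalue formulas for $A_k$, $B_k$, and $C_k$ used above lend themselves to a numerical check; the matrix constructors below follow the displays in this section (the code itself is ours):

```python
import numpy as np

def tridiag(n):
    """n×n matrix with 1/2 on the sub- and superdiagonal, 0 elsewhere."""
    return 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))

def A_matrix(k):
    return tridiag(k + 1)

def B_matrix(k):
    M = tridiag(k // 2 + 1)
    if k % 2 == 1:
        M[-1, -1] = 0.5                          # k odd: corner entry 1/2
    else:
        M[-1, -2] = M[-2, -1] = 1 / np.sqrt(2)   # k even: last off-diagonal 1/√2
    return M

def C_matrix(k):
    M = tridiag((k + 1) // 2)
    M[-1, -1] = ((-1) ** k - 1) / 4              # 0 for k even, -1/2 for k odd
    return M

ok = True
for k in range(1, 11):
    a = np.sort(np.linalg.eigvalsh(A_matrix(k)))
    a_pred = np.sort(np.cos(np.arange(1, k + 2) * np.pi / (k + 2)))
    b = np.sort(np.linalg.eigvalsh(B_matrix(k)))
    b_pred = np.sort(np.cos((2 * np.arange(1, (k + 2) // 2 + 1) - 1) * np.pi / (k + 2)))
    c = np.sort(np.linalg.eigvalsh(C_matrix(k)))
    c_pred = np.sort(np.cos(2 * np.arange(1, (k + 1) // 2 + 1) * np.pi / (k + 2)))
    ok = ok and np.allclose(a, a_pred) and np.allclose(b, b_pred) and np.allclose(c, c_pred)
```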

9 Shifts and diagonal operators

We consider here the symmetric tensor product of shift operators and diagonal operators. This setting suggests working on the sequence space $\ell ^2$ instead of $H^2(\mathbb {D})$ [Reference Garcia, Mashreghi and Ross12, Section 1.2]. Let $\mathbf {e}_0, \mathbf {e}_1, \ldots $ be the standard basis of $\ell ^2$ and consider the unilateral shift $S \mathbf {e}_i = \mathbf {e}_{i+1}$ [Reference Garcia, Mashreghi and Ross12, Chapter 3]. Its adjoint is given by $S^* \mathbf {e}_i = \mathbf {e}_{i-1}$ for $i \geqslant 1$ and $S^* \mathbf {e}_0 = 0$ .

Theorem 9.1 Let $M = \operatorname {diag}(\mu _0,\mu _1,\ldots )$ be a bounded diagonal operator on $\ell ^2$ .

(a) $\frac {1}{\sqrt {2}} \| M \| \leqslant \| S \odot M \| \leqslant \| M \|$. Both inequalities are sharp.

(b) If some $\mu _i = 0$ or the set of nonzero $\mu _i$ is bounded away from $0$, then $0 \in \sigma _{\mathrm {p}}(S \odot M)$.

(c) $\sigma _{\mathrm {p}} (S \odot M) \subseteq \{ 0 \}$.

Proof (a) Since $M^* = \operatorname {diag}(\overline {\mu _0},\overline {\mu _1},\ldots )$ is a diagonal operator and $(S \odot M^*)^* = S^* \odot M$ by Proposition 4.6, this follows from Theorem 9.2(a) below.

(b) Suppose that the set of nonzero $\mu _i$ is bounded away from zero. Note that for all $i,j \geqslant 0$ ,

(9.2) $$ \begin{align} (S \odot M)({\mathbf{e}_i} \odot {\mathbf{e}_j}) = \frac{\mu_j}{2}{\mathbf{e}_{i+1}} \odot {\mathbf{e}_j}+\frac{\mu_i}{2}{\mathbf{e}_i} \odot {\mathbf{e}_{j+1}}. \end{align} $$

If some $\mu _i = 0$, then (9.2) ensures that $0 \in \sigma _{\mathrm {p}}(S \odot M)$ since $(S \odot M)( \mathbf {e}_i \odot \mathbf {e}_i) = 0 $. Thus, we may assume that $|\mu _i| \geqslant \delta> 0$ for all $i\geqslant 0$. Define $C = (\| M \|/\delta )^2 \geqslant 1$.
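
The identity (9.2) is easy to confirm on a finite truncation. The sketch below assumes the realization $S \odot M = \tfrac{1}{2}(S \otimes M + M \otimes S)$ on the tensor square; the truncation size $N$ and the weights $\mu_i$ are arbitrary choices made for illustration.

```python
import numpy as np

# Finite truncation of ell^2; N and the weights mu are illustrative choices.
N = 8
S = np.zeros((N, N))
S[1:, :-1] = np.eye(N - 1)            # S e_i = e_{i+1} (truncated shift)
mu = 1.0 + 0.3 * np.arange(N)         # sample diagonal entries, all nonzero
M = np.diag(mu)

# Assumed realization: S (.) M = (S (x) M + M (x) S)/2 on the tensor square.
SM = 0.5 * (np.kron(S, M) + np.kron(M, S))

def sym(u, v):
    """u (.) v = (u (x) v + v (x) u)/2."""
    return 0.5 * (np.kron(u, v) + np.kron(v, u))

e = np.eye(N)
i, j = 2, 5
lhs = SM @ sym(e[i], e[j])
rhs = 0.5 * mu[j] * sym(e[i + 1], e[j]) + 0.5 * mu[i] * sym(e[i], e[j + 1])
assert np.allclose(lhs, rhs)          # matches (9.2)
```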

Let $\sum _{i \leqslant j } | a_{ij}|^2 < \infty $ and let $\mathbf {v} = 2\sum _{0 \leqslant i \leqslant j< \infty } a_{ij} \mathbf {e}_i \odot \mathbf {e}_j$ , which is well defined by Lemma 2.14. Then (9.2) ensures that

(9.3) $$ \begin{align} (S \odot M) \mathbf{v} = \sum_{0 \leqslant i \leqslant j< \infty} a_{ij}\left( \mu_j \mathbf{e}_{i+1}\odot \mathbf{e}_j + \mu_i \mathbf{e}_i \odot \mathbf{e}_{j+1} \right). \end{align} $$

When (9.3) is expanded, the coefficient of $\mathbf {e}_k \odot \mathbf {e}_{\ell }$ for $k \leqslant \ell $ is

(9.4) $$ \begin{align} &0, && \text{if } k = \ell = 0,\nonumber \\& 2\mu_0 a_{0, 0}, &&\text{if } k=0 \text{ and } \ell = 1,\nonumber \\& \mu_0 a_{0, \ell-1}, &&\text{if } k=0 \text{ and } \ell \geqslant 2, \end{align} $$
(9.5) $$ \begin{align} &\mu_k a_{k-1, k}, &&\text{if } 1 \leqslant k=\ell, \end{align} $$
(9.6) $$ \begin{align} &2\mu_k a_{k, k} + \mu_{k+1} a_{k-1, k+1}, &&\text{if } k \geqslant 1 \text{ and } \ell = k+1, \end{align} $$
(9.7) $$ \begin{align} &\mu_{k} a_{k, \ell-1} + \mu_{\ell} a_{k-1, \ell}, &&\text{if } k \geqslant 1 \text{ and } \ell \geqslant k+2. \end{align} $$

Then $(S \odot M)\mathbf {v} = \mathbf {0}$ if and only if the $a_{k,\ell }$ are square summable and (9.4)–(9.7) vanish for all $\ell \geqslant k \geqslant 0$ . We define such $a_{k,\ell }$ , not all zero, in four steps (see Figure 1).

(1) Let $a_{0,\ell } = 0$ for all $\ell \geqslant 0$, so that (9.4) vanishes.

(2) For each $k \geqslant 2$, let $a_{k-1,k} = a_{k-1,k+2} = a_{k-1,k+4} = \cdots = 0$. Then (9.5) and (9.7) vanish for $k \geqslant 2$ and even $\ell \geqslant k+2$.

(3) Let $a_{1,1}=0$ and, for each $k \geqslant 2$, let $a_{k,k}$ and $a_{k-1,k+1}$ be such that

(9.8) $$ \begin{align} \begin{bmatrix} 2a_{k,k} \\ a_{k-1,k+1} \end{bmatrix} \perp \begin{bmatrix} \overline{\mu_k} \\ \overline{\mu_{k+1}} \end{bmatrix} \quad \text{and} \quad 0< \left\| \begin{bmatrix} 2a_{k,k} \\ a_{k-1,k+1} \end{bmatrix} \right \| < \frac{1}{(k+1)^{3/2}}. \end{align} $$

Then (9.6) vanishes for $k \geqslant 2$ and $\ell = k+1$.

(4) For $k \geqslant 2$, let

(9.9) $$ \begin{align} a_{k-1, k+3} = - a_{k, k+2} \frac{\mu_k}{\mu_{k+3}}, \qquad a_{k-1, k+5}= - a_{k, k+4} \frac{\mu_k}{\mu_{k+5}},\ldots. \end{align} $$

Then (9.7) vanishes for all $k \geqslant 1$ with odd $\ell \geqslant k+3$.

This completes the definition of the $a_{k,\ell }$ . We must prove that they are square summable.

Figure 1: Colors denote the step where the $a_{k,\ell }$ are fixed: Step (1) is in violet; (2) is in red; (3) is in green; and (4) is in blue. The symmetry of symmetric tensors permits us to focus on $\ell \geqslant k \geqslant 0$ . The violet and red values are zero.

For $k \geqslant 1$ , (9.8) yields

(9.10) $$ \begin{align} |a_{k,k}|^2 < \frac{1}{(k+1)^3} \quad \text{and} \quad |a_{k-1,k+1}|^2 < \frac{1}{(k+1)^3}. \end{align} $$

Then (9.9) and then (9.10) with $k+1$ in place of k ensure that

(9.11) $$ \begin{align} |a_{k-1,k+3}|^2 = |a_{k, k+2}|^2 \left| \frac{\mu_{k}}{\mu_{k+3}} \right|{}^2 \leqslant C|a_{k, k+2}|^2 \leqslant \frac{C}{(k+2)^3} \end{align} $$

for $k \geqslant 1$ . Next (9.9) and then (9.11) with $k+1$ in place of k imply that

$$ \begin{align*} |a_{k-1,k+5}|^2 = |a_{k, k+4}|^2 \left| \frac{\mu_{k}}{\mu_{k+5}} \right|{}^2 \leqslant C|a_{k, k+4}|^2 \leqslant \frac{C}{(k+3)^3}, \end{align*} $$

so induction yields

(9.12) $$ \begin{align} | a_{k,k+2r} |^2 \leqslant \frac{C}{(k + r)^3}. \end{align} $$

Then Step 1, Step 2, and (9.12) ensure that

$$ \begin{align*} \sum_{0\leqslant k \leqslant \ell} |a_{k,\ell}|^2 = \sum_{k=1}^{\infty} \sum_{\ell \geqslant k} |a_{k,\ell}|^2 = \sum_{k=1}^{\infty} \sum_{r=0}^{\infty} |a_{k,k+2r}|^2 \leqslant C \sum_{k=1}^{\infty} \sum_{r=0}^{\infty} \frac{1}{(k + r)^3} , \end{align*} $$

which is finite by a standard argument in the study of elliptic functions [Reference Simon31, Proposition 10.4.2] (see footnote 1). Thus, $\mathbf {v}$ is a well-defined vector in the kernel of $S \odot M$.

(c) Suppose that $\lambda \neq 0$ and $(S \odot M) \mathbf {v} = \lambda \mathbf {v}$ , in which $\mathbf {v} = 2\sum _{0 \leqslant i \leqslant j< \infty } a_{ij} \mathbf {e}_i \odot \mathbf {e}_j$ and $\sum _{0\leqslant i \leqslant j < \infty } | a_{ij}|^2 < \infty $ . Then (9.2) ensures that

(9.13) $$ \begin{align} \mathbf{0} &= ((S \odot M) -\lambda I ) \mathbf{v} = 2\sum_{0 \leqslant i \leqslant j< \infty} a_{i,j} ((S \odot M) -\lambda I )(\mathbf{e}_i\odot \mathbf{ e}_j)\nonumber \\ &=\!\!\sum_{0 \leqslant i \leqslant j< \infty} \!\!\!\! \!\! a_{i,j}\mu_j \mathbf{e}_{i+1}\odot{\mathbf{e}_j} + \!\!\!\! \sum_{0 \leqslant i \leqslant j< \infty} \!\!\!\!\!\! a_{i,j}\mu_i \mathbf{e}_i\odot \mathbf{ e}_{j+1} - \!\!\!\! \sum_{0 \leqslant i \leqslant j< \infty} \!\!\!\! \!\! \lambda a_{i,j}{\mathbf{e}_i}\odot{\mathbf{ e}_j}. \end{align} $$

When (9.13) is expanded, the coefficient of $\mathbf {e}_k \odot \mathbf {e}_{\ell }$ for $k \leqslant \ell $ is

(9.14) $$ \begin{align} 0&=-\lambda a_{0,0},&& \text{if } k = \ell = 0, \\ 0&= 2\mu_0 a_{0, 0} -\lambda a_{0,1}, &&\text{if } k=0 \text{ and } \ell = 1,\nonumber \end{align} $$
(9.15) $$ \begin{align} 0=& \mu_0 a_{0, \ell-1} -\lambda a_{0,\ell}, &&\text{if } k=0 \text{ and } \ell \geqslant 2, \end{align} $$
(9.16) $$ \begin{align} 0=&\mu_k a_{k-1, k} - \lambda a_{k,k}, &&\text{if } 1 \leqslant k=\ell, \end{align} $$
(9.17) $$ \begin{align} 0=&2\mu_k a_{k, k} + \mu_{k+1} a_{k-1, k+1} -\lambda a_{k,k+1},&&\text{if } k \geqslant 1 \text{ and } \ell = k+1, \end{align} $$
(9.18) $$ \begin{align} 0=&\mu_{k} a_{k, \ell-1} + \mu_{\ell} a_{k-1, \ell} - \lambda a_{k,\ell}, &&\text{if } k \geqslant 1 \text{ and } \ell \geqslant k+2. \end{align} $$

We use induction to prove that $a_{k,\ell } = 0$ for $0 \leqslant k \leqslant \ell < \infty $ . For $k \geqslant 0$ , let $\texttt {P}(k)$ be the statement “ $a_{k, k+i} = 0$ for all $i \geqslant 0$ .” The truth of $\texttt {P}(0)$ follows from (9.14), which ensures that $a_{0,0}=0$ , and induction on $\ell $ using (9.15), which yields $a_{0,\ell } = 0$ for $\ell \geqslant 0$ .

Suppose $\texttt {P}(k-1)$ is true: $a_{k-1,\ell } = 0$ for $\ell \geqslant k-1$ . Then (9.16) yields $\lambda a_{k,k} = \mu _{k} a_{k-1,k} = 0$ , so $a_{k,k}=0$ . Next, (9.17) ensures that $\lambda a_{k,k+1} =2 \mu _{k} a_{k,k} + \mu _{k+1} a_{k-1,k+1} = 0$ , so $a_{k,k+1}=0$ . Finally, (9.18) and induction on $\ell $ tell us that $\lambda a_{k,\ell } = \mu _{k} a_{k, \ell -1} + \mu _{\ell } a_{k-1, \ell } = 0$ for $\ell \geqslant k+2$ . Thus, $\texttt {P}(k)$ is true, so $\mathbf {v} = \mathbf {0}$ and $\lambda \notin \sigma _{\mathrm {p}}(S \odot M)$ .

Theorem 9.2 Let $M = \operatorname {diag}(\mu _0,\mu _1,\ldots )$ be a bounded diagonal operator on $\ell ^2$ .

(a) $\frac {1}{\sqrt {2}} \| M \| \leqslant \| S^* \odot M \| \leqslant \| M \|$. Both inequalities are sharp.

(b) $\{ |z| < \frac {1}{2}|\mu _0| \} \cup \{0\} \subseteq \sigma _{\mathrm {p}} ( S^* \odot M )$.

Proof (a) Since $\|S^*\| = 1$, Theorem 3.4 yields $\| S^* \odot M \| \leqslant \| M \|$. Equality holds for $M= I$: indeed, $\sigma (S^*) = \mathbb {D}^-$ [Reference Garcia, Mashreghi and Ross12, Proposition 5.2.4.a], so $\sigma (S^*\odot I) = \frac {1}{2}(\sigma (S^*)+\sigma (S^*)) = \mathbb {D}^-$ by Theorem 6.3, and hence the spectral radius of $S^* \odot I$ is $1$. Thus, $\| S^* \odot I \| \geqslant 1 = \| I \|$.

Suppose that $M \mathbf {e}_i = \mu _i \mathbf {e}_i$ for $i\geqslant 0$ . Then

(9.20) $$ \begin{align} (S^* \odot M) (\mathbf{e}_i \odot \mathbf{e}_j) = \begin{cases} \frac{1}{2}( \mu_j \mathbf{e}_{i-1}\odot \mathbf{e}_j+\mu_i \mathbf{e}_i\odot \mathbf{e}_{j-1} ), & \text{if } i,j\neq 0,\\[3pt] \frac{1}{2}( \mu_i \mathbf{e}_i\odot \mathbf{e}_{j-1} ), & \text{if } 0 = i < j,\\[3pt] \frac{1}{2}( \mu_j \mathbf{e}_{i-1}\odot \mathbf{e}_j ), & \text{if } 0=j<i,\\[3pt] \mathbf{0}, & \text{if } i=j=0. \end{cases} \end{align} $$

For each $\epsilon>0$ , there is a $\mu _i$ such that $\| M \|-\epsilon \leqslant |\mu _i|$ . Then (9.20) ensures that

$$ \begin{align*} \frac{\| M \|-\epsilon}{\sqrt{2}}\leqslant \frac{| \mu_i |}{\sqrt{2}} =\| \mu_i \mathbf{e}_{i-1} \odot \mathbf{e}_{i} \| = \| (S^*\odot M)( \mathbf{e}_{i} \odot \mathbf{e}_{i}) \| \leqslant \| S^* \odot M \|. \end{align*} $$

Let $\epsilon \to 0$ to obtain the desired lower bound.

If $\mu _i = \delta _{i0}$ for all $i\geqslant 0$ , then $\| (S^* \odot M)(\sqrt {2} \mathbf {e}_0 \odot \mathbf {e}_1) \|=\| \frac {1}{\sqrt {2}}( \mathbf {e}_0 \odot \mathbf { e}_0 ) \| = \frac {1}{\sqrt {2}}$ and $\| M \|=1$ , so the lower bound is sharp.

(b) If $\mu _0 = 0$ , the last line of (9.20) ensures that $0 \in \sigma _{\mathrm {p}}(S^* \odot M)$ . Let $\mu _0 \neq 0$ and $| \lambda | < \frac {1}{2}|\mu _0|$ . Lemma 2.14 permits us to define $\mathbf {v} = \sum _{j=0}^{\infty } \frac {(2\lambda )^j}{\mu _0^j} \mathbf {e}_0 \odot \mathbf {e}_j$ . Then (9.20) ensures that $\lambda \in \sigma _{\mathrm {p}}(S^*\odot M)$ since

$$ \begin{align*} (S^* \odot M)\mathbf{v} &= (S^* \odot M)\Big( \sum_{j=0}^{\infty} \frac{(2\lambda)^j}{\mu_0^j} \mathbf{e}_0 \odot \mathbf{e}_j \Big) = \sum_{j=0}^{\infty} \frac{(2\lambda)^j}{\mu_0^j} (S^* \odot M) (\mathbf{e}_0 \odot \mathbf{e}_j ) \\ &=\frac{1}{2} \sum_{j=1}^{\infty} \frac{(2\lambda)^j}{\mu_0^j} \mu_0 \mathbf{e}_0 \odot \mathbf{e}_{j-1} = \lambda \sum_{j=1}^{\infty} \frac{(2\lambda)^{j-1}}{\mu_0^{j-1}} \mathbf{e}_0 \odot \mathbf{e}_{j-1} = \lambda \mathbf{v}. \end{align*} $$
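
The eigenvector computation above can be confirmed on a finite truncation. The sketch below again assumes the realization $S^* \odot M = \tfrac{1}{2}(S^* \otimes M + M \otimes S^*)$; truncating $\mathbf{v}$ at $j = J$ leaves only an exactly computable tail term, so the check is exact up to floating-point error.

```python
import numpy as np

N = 12
St = np.zeros((N, N))
St[:-1, 1:] = np.eye(N - 1)            # S* e_j = e_{j-1}, S* e_0 = 0
mu = 2.0 - 0.1 * np.arange(N)          # mu_0 = 2; any bounded choice works
M = np.diag(mu)
SM = 0.5 * (np.kron(St, M) + np.kron(M, St))   # assumed realization of S* (.) M

def sym(u, v):
    return 0.5 * (np.kron(u, v) + np.kron(v, u))

lam = 0.4                              # |lam| < |mu_0|/2
e = np.eye(N)
J = N - 1
v = sum((2 * lam / mu[0]) ** j * sym(e[0], e[j]) for j in range(J + 1))

# On the truncation, (S* (.) M)v = lam*v minus an exactly computable tail term.
tail = lam * (2 * lam / mu[0]) ** J * sym(e[0], e[J])
assert np.allclose(SM @ v, lam * v - tail)
```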

10 Questions for further research

We conclude with questions to spur future research. Some are general, others specific. Perhaps the answers to a few are buried in the literature, although we did not find them.

Lemma 2.15 prompts us to consider symmetric tensor products of more than two vectors. If $\mathbf {x}_1,\mathbf {x}_2,\ldots , \mathbf {x}_n \in \mathcal {H}$ , then $\| \mathbf {x}_1 \odot \mathbf {x}_2 \odot \cdots \odot \mathbf {x}_n \| = \| \mathrm {S}_n ( \mathbf {x}_1 \otimes \mathbf {x}_2 \otimes \cdots \otimes \mathbf {x}_n) \| \leqslant \| \mathbf {x}_1 \otimes \mathbf {x}_2 \otimes \cdots \otimes \mathbf {x}_n \| = \| \mathbf {x}_1 \| \| \mathbf {x}_2 \| \cdots \| \mathbf {x}_n \|$ . Equality occurs if $\mathbf {x}_1 = \mathbf {x}_2 = \cdots = \mathbf {x}_n$ . Thus, only lower bounds on $\| \mathbf {x}_1 \odot \mathbf {x}_2 \odot \cdots \odot \mathbf {x}_n \|$ are of interest. Here is a partial answer.

Lemma 10.1 $\frac {1}{\sqrt {6}} \| \mathbf {x}_1 \| \| \mathbf {x}_2 \| \| \mathbf {x}_3 \| \leqslant \| \mathbf {x}_1 \odot \mathbf {x}_2 \odot \mathbf {x}_3 \| \leqslant \| \mathbf {x}_1 \| \| \mathbf {x}_2 \| \| \mathbf {x}_3 \|$ for $\mathbf {x}_1, \mathbf {x}_2, \mathbf {x}_3 \in \mathcal {H}$ . These inequalities are sharp.

Proof The upper bound is discussed above. Without loss of generality, suppose $\mathbf {x}_1, \mathbf {x}_2, \mathbf {x}_3 $ have unit norm. Then

(10.2) $$ \begin{align} 36\| \mathbf{x}_1 \odot \mathbf{x}_2 \odot \mathbf{x}_3 \|^2 &= \sum_{\tau,\pi \in \Sigma_3} \langle \mathbf{x}_{\tau(1)} \otimes \mathbf{x}_{\tau(2)} \otimes \mathbf{x}_{\tau(3)} , \mathbf{x}_{\pi(1)} \otimes \mathbf{x}_{\pi(2)} \otimes \mathbf{x}_{\pi(3)} \rangle \nonumber\\ & = 6 + \underbrace{\sum_{\tau \neq \pi} \langle \mathbf{x}_{\tau(1)} \otimes \mathbf{x}_{\tau(2)} \otimes \mathbf{x}_{\tau(3)} , \mathbf{x}_{\pi(1)} \otimes \mathbf{x}_{\pi(2)} \otimes \mathbf{x}_{\pi(3)} \rangle}_{c} , \end{align} $$

in which $c\in \mathbb {R}$ is of the form

$$ \begin{align*} c &= 6 (| \langle \mathbf{x}_2, \mathbf{x}_3 \rangle |^2 + | \langle \mathbf{x}_1, \mathbf{x}_2 \rangle |^2 + | \langle \mathbf{x}_1, \mathbf{x}_3 \rangle |^2 )\nonumber \\ &\qquad+ \underbrace{6 \langle \mathbf{x}_1, \mathbf{x}_2 \rangle \langle \mathbf{x}_2, \mathbf{x}_3 \rangle \langle \mathbf{x}_3, \mathbf{x}_1 \rangle + 6 \langle \mathbf{x}_1, \mathbf{x}_3 \rangle \langle \mathbf{x}_3, \mathbf{x}_2 \rangle \langle \mathbf{x}_2, \mathbf{x}_1 \rangle}_{d}. \end{align*} $$

Muirhead’s inequality [Reference Hardy, Littlewood and Pólya17, Chapter 2, Section 18] shows that for $x,y,z \in [0,1]$ ,

(10.3) $$ \begin{align} x^2 + y^2 + z^2 \geqslant 2 xyz. \end{align} $$

Let $x = |\langle \mathbf {x}_1, \mathbf {x}_2\rangle |$ , $y = |\langle \mathbf {x}_2, \mathbf {x}_3\rangle |$ , and $z = |\langle \mathbf {x}_3, \mathbf {x}_1\rangle | $ in (10.3) and get (since $d\in \mathbb {R}$ )

$$ \begin{align*} 6 \big(| \langle \mathbf{x}_2, \mathbf{x}_3\rangle |^2 + | \langle \mathbf{x}_1, \mathbf{x}_2\rangle |^2 + |\langle \mathbf{x}_1, \mathbf{x}_3\rangle |^2 \big) \geqslant 12 |\langle \mathbf{x}_1, \mathbf{x}_2\rangle| |\langle \mathbf{x}_2, \mathbf{x}_3 \rangle| |\langle \mathbf{x}_3, \mathbf{x}_1 \rangle| \geqslant - d. \end{align*} $$

Thus, $c \geqslant 0$ and we obtain the desired lower bound. If $\mathbf {x}_1, \mathbf {x}_2, \mathbf {x}_3$ are pairwise orthogonal, then $c=0$ in (10.2), so the lower bound is sharp.
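
A numerical sketch of Lemma 10.1 (illustration only): sample random unit vectors in $\mathbb{R}^4$, form the threefold symmetrization directly from the definition, and check both bounds together with the configurations that attain them.

```python
import numpy as np
from itertools import permutations

def sym3(x1, x2, x3):
    """x1 (.) x2 (.) x3, realized as the average of the six permuted tensors."""
    vs = (x1, x2, x3)
    return sum(np.kron(np.kron(vs[a], vs[b]), vs[c])
               for a, b, c in permutations(range(3))) / 6.0

rng = np.random.default_rng(0)
lo, hi = 1 / np.sqrt(6), 1.0
for _ in range(100):
    x = [rng.standard_normal(4) for _ in range(3)]
    x = [v / np.linalg.norm(v) for v in x]
    assert lo - 1e-12 <= np.linalg.norm(sym3(*x)) <= hi + 1e-12

# Both bounds are attained: orthonormal triples give 1/sqrt(6), equal vectors give 1.
e = np.eye(4)
assert np.isclose(np.linalg.norm(sym3(e[0], e[1], e[2])), lo)
assert np.isclose(np.linalg.norm(sym3(e[0], e[0], e[0])), hi)
```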

Problem 1 Is $\frac {1}{\sqrt {n!}} \| \mathbf {x}_1 \| \| \mathbf {x}_2 \| \cdots \| \mathbf {x}_n \| \leqslant \| \mathbf {x}_1 \odot \mathbf {x}_2 \odot \cdots \odot \mathbf {x}_n \|$ for $\mathbf {x}_1, \mathbf {x}_2, \ldots ,\mathbf {x}_n \in \mathcal {H}$ ?

Lemma 10.1 leads to an analogue of Theorem 5.1(a) for three operators.

Theorem 10.4 $ \displaystyle \frac {1}{\sqrt {6}}\sup _{\substack {\mathbf {x} \in \mathcal {H} \\ \|\mathbf {x}\| = 1}} \big \{ { \|A\mathbf {x}\| \|B\mathbf {x}\| \| C\mathbf {x} \|} \big \} \leqslant \| A \odot B \odot C \|$ for $A, B, C \in \mathcal {B}( \mathcal {H} )$ .
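
The inequality of Theorem 10.4 can be probed numerically. The sketch below assumes, by analogy with the two-factor case, that $A \odot B \odot C$ is the compression to the symmetric subspace of $\tfrac{1}{3!}\sum_{\pi} A_{\pi(1)} \otimes A_{\pi(2)} \otimes A_{\pi(3)}$; since a sampled supremum never exceeds the true supremum, the asserted inequality must hold against it as well.

```python
import numpy as np
from itertools import permutations, product

d = 3
rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((d, d)) for _ in range(3))

def P(perm):
    """Matrix permuting the three tensor factors of (R^d)^{(x)3}."""
    mat = np.zeros((d**3, d**3))
    for idx in product(range(d), repeat=3):
        src = np.ravel_multi_index(idx, (d, d, d))
        tgt = np.ravel_multi_index(tuple(idx[p] for p in perm), (d, d, d))
        mat[tgt, src] = 1.0
    return mat

Ps = sum(P(p) for p in permutations(range(3))) / 6.0    # symmetrizer
T = sum(np.kron(np.kron(X, Y), Z)
        for X, Y, Z in permutations((A, B, C))) / 6.0   # symmetrized tensor of A, B, C
norm_sym = np.linalg.norm(Ps @ T @ Ps, 2)               # assumed value of ||A (.) B (.) C||

# A sampled supremum is a lower bound for the true supremum over unit vectors.
best = 0.0
for _ in range(2000):
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    best = max(best, np.linalg.norm(A @ x) * np.linalg.norm(B @ x) * np.linalg.norm(C @ x))

assert best / np.sqrt(6) <= norm_sym + 1e-10
```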

Problem 2 For $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$, is

$$ \begin{align*} \frac{1}{\sqrt{n!}}\sup_{ \substack{\mathbf{x} \in \mathcal{H} \\ \|\mathbf{x}\| = 1}} \big\{ { \|A_1\mathbf{x}\| \|A_2\mathbf{x}\|\cdots \| A_n \mathbf{x} \|} \big\} \leqslant\| A_1 \odot A_2 \odot \cdots \odot A_n \| ? \end{align*} $$

Proposition 7.2 provides the sharp inequalities $ \|L\| \| M \| (\sqrt {2}-1) \leqslant \| L \odot M \| \leqslant \|L\| \|M\|$ for diagonal operators $L,M$ (with respect to the same orthonormal basis). Since the upper bound easily generalizes, the lower bound is of greater interest.
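
For diagonal $L = \operatorname{diag}(\lambda_i)$ and $M = \operatorname{diag}(\mu_i)$, the definition gives $(L \odot M)(\mathbf{e}_i \odot \mathbf{e}_j) = \tfrac{1}{2}(\lambda_i \mu_j + \lambda_j \mu_i)\, \mathbf{e}_i \odot \mathbf{e}_j$, so $\|L \odot M\| = \sup_{i,j} \tfrac{1}{2}|\lambda_i \mu_j + \lambda_j \mu_i|$. The sketch below checks both inequalities of Proposition 7.2 on random complex diagonals (finite diagonals suffice, since padding by zeros changes nothing).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
for _ in range(200):
    l = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    m = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    # ||L (.) M|| for diagonal L, M, computed entrywise from the definition.
    norm_LM = max(abs(l[i] * m[j] + l[j] * m[i]) / 2
                  for i in range(n) for j in range(n))
    lo = (np.sqrt(2) - 1) * abs(l).max() * abs(m).max()
    hi = abs(l).max() * abs(m).max()
    assert lo - 1e-12 <= norm_LM <= hi + 1e-12
```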

Problem 3 Let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ be diagonal operators (with respect to the same orthonormal basis). Find a sharp lower bound, in the spirit of Proposition 7.2, on $\| A_1 \odot A_2 \odot \cdots \odot A_n \|$ in terms of $\| A_1 \|, \| A_2 \|,\ldots ,\| A_n \|$ .

The Weyl–von Neumann–Berg theorem asserts that every normal operator on a separable Hilbert space is the sum of a diagonal operator and a compact operator of arbitrarily small norm [Reference Davidson8, Corollary II.4.2]. This suggests possible extensions to normal operators.

Problem 4 Let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ be commuting normal operators. Find a sharp lower bound, in the spirit of Proposition 7.2, on $\| A_1 \odot A_2 \odot \cdots \odot A_n \|$ in terms of $\| A_1 \|, \| A_2 \|,\ldots ,\| A_n \|$ .

Proposition 7.5 suggests the following.

Problem 5 Let $A_1, A_2, \ldots , A_n \in \mathcal {B}(\mathcal {H})$ be commuting normal operators. Describe $\sigma (A_1 \odot A_2 \odot \cdots \odot A_n)$ (and its parts).

Let us now consider the unilateral shift S and its adjoint. Theorem 8.1 identified the norm and spectrum of $S \odot S^*$ and $S \wedge S^*$ . What can be said about other combinations?

Problem 6 Identify the norm and spectrum of arbitrary symmetric or antisymmetric tensor products of S and $S^*$ (for example, consider $S^2 \odot S \odot S^{*3}$ and $S^2 \wedge S \wedge S^{*3}$ ).

Problem 7 Describe the norm and spectrum of $S_{\alpha } \odot S_{\alpha }^*$ and $S_{\alpha } \wedge S_{\alpha }^*$ , in which $S_{\alpha }$ is a weighted shift operator. What can be said if more factors are included?

Theorems 9.1 and 9.2 answer some questions about $S \odot M$ and $S^* \odot M$ , in which $M=\operatorname {diag}(\mu _0,\mu _1,\ldots )$ is a diagonal operator. However, a complete picture eludes us.

Problem 8 Identify the norm and spectrum (and its parts) for $S \odot M$ and $S^* \odot M$ .

The general problem suggested by the previous questions is the following.

Problem 9 For $A_1,A_2,\ldots ,A_n \in \mathcal {B}(\mathcal {H})$, describe the norm and spectrum (and its parts) of $A_1 \odot A_2 \odot \cdots \odot A_n$ and $A_1 \wedge A_2 \wedge \cdots \wedge A_n$.

There are countless other questions that can be raised. For example, what can be said about symmetric tensor products of composition operators?

Footnotes

Garcia is supported by NSF Grant DMS-2054002. O’Loughlin is grateful to EPSRC for financial support.

1 The double sum can be explicitly evaluated. Write the summands in an array with r indexing the columns and k the rows. Sum each column and simplify to reduce the double sum to the well-known $\sum _{k=1}^{\infty } \frac {1}{k^2} = \frac {\pi ^2}{6}$ .
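
The footnote's evaluation can be confirmed numerically (the truncation threshold below is an illustrative choice):

```python
import math

# Partial double sum, truncated to k + r < K; rows are indexed by k, columns by r.
K = 2000
total = sum(1.0 / (k + r) ** 3 for k in range(1, K) for r in range(K - k))

# Grouping by s = k + r gives s copies of 1/s^3, so the full sum is sum 1/s^2 = pi^2/6.
assert abs(total - math.pi ** 2 / 6) < 1e-3
```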

References

Bhatia, R., Matrix analysis, Graduate Texts in Mathematics, 169, Springer, New York, 1997.
Brown, A. and Pearcy, C., Spectra of tensor products of operators. Proc. Amer. Math. Soc. 17(1966), 162–166.
Carroll, S., Spacetime and geometry: an introduction to general relativity, Addison Wesley, San Francisco, CA, 2004.
Comon, P., Tensor decompositions. In: Mathematics in signal processing V, Clarendon Press, Oxford, 2002, pp. 1–24.
Comon, P., Golub, G., Lim, L., and Mourrain, B., Symmetric tensors and symmetric tensor rank. SIAM J. Matrix Anal. Appl. 30(2008), no. 3, 1254–1279.
Comon, P. and Rajih, M., Blind identification of under-determined mixtures based on the characteristic function. Signal Process. 86(2006), no. 9, 2271–2281.
Conway, J. B., A course in functional analysis, 2nd ed., Graduate Texts in Mathematics, 96, Springer, New York, 1990.
Davidson, K. R., C*-algebras by example, Fields Institute Monographs, 6, American Mathematical Society, Providence, RI, 1996.
De Lathauwer, L., De Moor, B., and Vandewalle, J., A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 21(2000), no. 4, 1253–1278.
Douglas, R. G. and Yang, R., Operator theory in the Hardy space over the bidisk. I. Integr. Equ. Oper. Theory 38(2000), no. 2, 207–221.
Fulton, W. and Harris, J., Representation theory: a first course, Graduate Texts in Mathematics, 129 (Readings in Mathematics), Springer-Verlag, New York, 1991.
Garcia, S. R., Mashreghi, J., and Ross, W. T., Operator theory by example, Oxford Graduate Texts in Mathematics, 30, Oxford University Press, Oxford, 2023.
Garcia, S. R., Prodan, E., and Putinar, M., Mathematical and physical aspects of complex symmetric operators. J. Phys. A 47(2014), no. 35, Article no. 353001, 54 pp.
Garcia, S. R. and Putinar, M., Complex symmetric operators and applications. Trans. Amer. Math. Soc. 358(2006), no. 3, 1285–1315.
Garcia, S. R. and Putinar, M., Complex symmetric operators and applications. II. Trans. Amer. Math. Soc. 359(2007), no. 8, 3913–3931.
Greub, W., Multilinear algebra, 2nd ed., Universitext, Springer, New York–Heidelberg, 1978.
Hardy, G. H., Littlewood, J. E., and Pólya, G., Inequalities, 2nd ed., Cambridge University Press, Cambridge, 1952.
Ibort, A., Marmo, G., and Pérez-Pardo, J. M., Boundary dynamics driven entanglement. J. Phys. A Math. Theor. 47(2014), no. 38, 385301.
Ibort, A. and Pérez-Pardo, J. M., On the theory of self-adjoint extensions of symmetric operators and its applications to quantum physics. Int. J. Geom. Methods Mod. Phys. 12(2015), no. 6, 1560005.
Kostrikin, A. I. and Manin, Y. I., Linear algebra and geometry, English ed., Algebra, Logic and Applications, 1, Gordon and Breach Science, Amsterdam, 1997. Translated from the second Russian (1986) edition by M. E. Alferieff.
Kulkarni, D., Schmidt, D., and Tsui, S., Eigenvalues of tridiagonal pseudo-Toeplitz matrices. Linear Algebra Appl. 297(1999), nos. 1–3, 63–80.
Lenz, D., Weinmann, T., and Wirth, M., Self-adjoint extensions of bipartite Hamiltonians. Proc. Edinb. Math. Soc. (2) 64(2021), no. 3, 433–447.
McCullagh, P., Tensor methods in statistics, Monographs on Statistics and Applied Probability, Chapman & Hall, London, 1987.
Naber, G. L., Quantum mechanics: an introduction to the physical background and mathematical structure, De Gruyter, Berlin, 2021.
Putnam, C. R., On normal operators in Hilbert space. Amer. J. Math. 73(1951), 357–362.
Riemann, B., Bernhard Riemann "Über die Hypothesen, welche der Geometrie zu Grunde liegen", Klassische Texte der Wissenschaft [Classical Texts of Science], Springer Spektrum, Berlin–Heidelberg, 2013. Historical and mathematical commentary by Jürgen Jost.
Riemann, B., On the hypotheses which lie at the bases of geometry, Classic Texts in the Sciences, Birkhäuser/Springer, Cham, 2016. Edited and with commentary by Jürgen Jost; expanded English translation of the German original.
Rudin, W., Real and complex analysis, 2nd ed., McGraw-Hill Series in Higher Mathematics, McGraw-Hill, New York–Düsseldorf–Johannesburg, 1974.
Schechter, M., On the spectra of operators on tensor products. J. Functional Analysis 4(1969), 95–99.
Sidiropoulos, N. D., Bro, R., and Giannakis, G. B., Parallel factor analysis in sensor array processing. IEEE Trans. Signal Process. 48(2000), no. 8, 2377–2388.
Simon, B., Basic complex analysis: a comprehensive course in analysis, part 2A, American Mathematical Society, Providence, RI, 2015.
Simon, B., Real analysis: a comprehensive course in analysis, part 1, American Mathematical Society, Providence, RI, 2015. With a 68 page companion booklet.
Smilde, A. K., Geladi, P., and Bro, R., Multi-way analysis: applications in the chemical sciences, John Wiley & Sons, Hoboken, NJ, 2005.
Zhang, D., Wang, Q., and Gong, J., Quantum geometric tensor in PT-symmetric quantum mechanics. Phys. Rev. A 99(2019), no. 4, 042104.