1 Introduction
The decomposition of normed linear spaces into direct sums and the analysis of the associated projection operators is central to important chapters in the theory of modern and classical Banach spaces. In a seminal paper, Lindenstrauss [Reference Lindenstrauss23] set forth an influential research program aiming at detailed investigations of complemented subspaces and operators on Banach spaces.
The main question addressed by Lindenstrauss was this: Which are the spaces X that cannot be further decomposed into two `essentially different, infinite-dimensional subspaces'? That is to say, which are the Banach spaces X that are not isomorphic to the direct sum of two infinite-dimensional spaces Y and Z, where neither Y nor Z is isomorphic to X? This condition would be satisfied if X were indecomposable: that is, for any decomposition of X into two spaces, one of them has to be finite-dimensional. Separately, such a space could be primary, meaning that for any decomposition of X into two spaces, one of them has to be isomorphic to $X.$ The first example of an indecomposable Banach space was constructed by Gowers and Maurey [Reference Gowers and Maurey18], who also showed that their space $X_{\mathrm {GM}} $ is not primary: the infinite-dimensional component of a decomposition $X_{\mathrm {GM}} \sim X\oplus Y $ is not isomorphic to the whole space.
While indecomposable spaces play a tremendous role [Reference Argyros and Haydon3, Reference Gowers and Maurey18, Reference Maurey27] in the present-day study of non-classical Banach spaces, a wide variety of Banach function spaces may usually be decomposed, for instance, by restriction to subsets, by taking conditional expectations, and so on. This provides the background for the program set forth by Lindenstrauss to determine the ‘classical’ spaces that are primary.
1.1 Background and history
The term ‘classical Banach space’ – while not formally defined – certainly applies to the space $C[0,1]$ and to scalar and vector-valued Lebesgue spaces. The space of continuous functions was shown to be primary by Lindenstrauss and Pełczyński [Reference Lindenstrauss and Pełczyński24], who posed the corresponding problem for scalar-valued $L_p$ spaces. Its elegant solution by Enflo via Maurey [Reference Maurey26] introduced a groundbreaking method of proof that applies equally well to each of the $L_p$ spaces $(1 \le p < \infty)$. Later, alternative proofs were given by Alspach, Enflo and Odell [Reference Alspach, Enflo and Odell1] for $L_p$ in the reflexive range $1<p<\infty$ and by Enflo and Starbird [Reference Enflo and Starbird16] for $L_1$.
Exceptionally deep results on the decomposition of Bochner–Lebesgue spaces $L_p(X)$ are due to Capon [Reference Capon12, Reference Capon11], who obtained that those spaces are primary in the following cases:

 X is a Banach space with a symmetric basis, and $1 \le p < \infty $ .

 $ X = L_ q $ , where $ 1< q < \infty $ and $ 1< p < \infty $ .
This leaves the spaces $ L_1(L_p) $ and $ L_p(L_1) $ among the most prominent examples of classical Banach spaces for which primariness is open.
After Capon’s paper [Reference Capon12], the focus was concentrated mostly on non-separable Banach spaces, where Bourgain [Reference Bourgain7] developed a very flexible method based on localisation to a sequence of quantitative finite-dimensional factorisation problems. This method led to results like the primariness of $\mathcal {L}(\ell _2)$ [Reference Blower6] and the primariness of $\mathrm {BMO}$ and its predual $H_1$ by the third author [Reference Müller29].
The purpose of the present paper is to prove that $ L_1(L_p) $ is primary. Our proof works equally well for real and complex-valued functions. Before we describe our work, we review in some detail the development of methods pertaining to the spaces $L_p$ and, more broadly, to rearrangement-invariant spaces.
Projections on those spaces are studied effectively alongside the Haar system and the reproducing properties of its block bases. The methods developed for proving that a particular Lebesgue space $L_p$ is primary may be divided into two basic classes, depending on whether the Haar system is an unconditional Schauder basis.
In the case of unconditionality, the most flexible method goes back to the work of Alspach, Enflo and Odell [Reference Alspach, Enflo and Odell1]. For a linear operator T on $L_p $ , it yields a block basis of the Haar system $\widetilde {h}_I $ and a bounded sequence of scalars $a_I $ forming an approximate eigensystem of T such that
and $\widetilde {h}_I$ spans a complemented copy of the space $L_p $ . Thus, when restricted to $\mathrm{span}\,\{\widetilde {h}_I\}$ , the operator T acts as a bounded Haar multiplier. Since the Haar basis is unconditional, the Haar multiplier is invertible if $|a_I| > \delta $ for some $\delta > 0.$
Alspach, Enflo and Odell [Reference Alspach, Enflo and Odell1] arrive at equation (1) by ensuring that for $\varepsilon _{I, J }> 0 $ sufficiently small, the following linearly ordered set of constraints holds true
where the relation $\prec $ refers to the lexicographic order on the collection of dyadic intervals. Utilising that the independent $\{-1,+1\}$-valued Rademacher system $\{r_n\}$ is a weakly null sequence in $L_p$ , $(1 \le p < \infty )$ , Alspach, Enflo and Odell [Reference Alspach, Enflo and Odell1] obtain, by induction along $\prec $ , the block basis $ \widetilde {h}_I$ satisfying equation (2).
The Alspach–Enflo–Odell method provides the basic model for the study of operators on function spaces in which the Haar system is unconditional; this applies in particular to rearrangement-invariant spaces in [Reference Johnson, Maurey, Schechtman and Tzafriri20] and [Reference Dosev, Johnson and Schechtman15].
In $L_1 $ , the Haar system is a Schauder basis but fails to be unconditional. The basic methods for proving that $L_1 $ is primary are due to Enflo via Maurey [Reference Maurey26] on the one hand and Enflo and Starbird [Reference Enflo and Starbird16] on the other hand. For operators T on $L_1 $ , the Enflo–Maurey method yields a block basis of the Haar basis $\widetilde {h}_I $ and a bounded measurable function g, such that
for $ f \in \mathrm {span}\{ \widetilde {h}_I\}$ , and $\widetilde {h}_I$ spans a copy of $L_1 $ . Thus the restricted operator T acts as a bounded multiplication operator and is invertible if $|g| > \delta $ for some $\delta > 0.$ The full strength of the proof by Enflo–Maurey is applied to show that the representation in equation (3) holds true.
Enflo and Maurey [Reference Maurey26] exhibit in their proof of equation (3) a bounded sequence of scalars $a_I $ such that
Since the Rademacher system $\{r_n\}$ is a weakly null sequence in $L_1$ , equation (4) may be obtained directly by choosing a block basis for which the constraints in equation (2) and
hold true. Remarkably, until very recently [Reference Lechner, Motakis, Müller and Schlumprecht22], eigensystem representations such as equation (4) were not exploited in the context of $L_1$ , where the Haar system is not unconditional.
The powerful precision of $L_1$ constructions with dyadic martingales and block bases of the Haar system is on full display in [Reference Johnson, Maurey and Schechtman19] and [Reference Talagrand36]. Johnson, Maurey and Schechtman [Reference Johnson, Maurey and Schechtman19] determined a normalised weakly null sequence in $L_1 $ such that each of its infinite subsequences contains in its span a block basis of the Haar system $\widetilde {h}_I$ , spanning a copy of $L_1. $ Thus $L_1 $ fails to satisfy the unconditional subsequence property, resolving a problem posed by Maurey and Rosenthal [Reference Maurey and Rosenthal28]. By contrast, Talagrand [Reference Talagrand36] constructed a dyadic martingale difference sequence $g_{n,k} $ such that neither $X = \overline {\mathrm {span}\, }^{L_1} \{ g_{n,k} \}$ nor $L_1/X$ contains a copy of $L_1$ .
The investigation of complemented subspaces in Bochner–Lebesgue spaces was initiated by Capon [Reference Capon12, Reference Capon11], who pushed hard to further the development of the scalar methods and proved that $L_p(X)$ $(1\le p < \infty )$ is primary when X is a Banach space with a symmetric basis, say $(x_{k}).$ Specifically, Capon [Reference Capon12] showed that for an operator T on $L_p(X)$ , there exists a block basis of the Haar basis $\widetilde {h}_I $ , a subsequence of the symmetric basis $(x_{k_n}) $ and a bounded measurable g such that
for $ f \in \mathrm {span} \{\widetilde {h}_I \}$ . Thus on $ \mathrm {span} \{\widetilde {h}_I \} \otimes \mathrm {span} \{x_{k_n}\}$ the operator T acts like $M_g \otimes Id $ , where $M_g $ is the multiplication operator induced by $g. $ Simultaneously, Capon shows that the tensor products form an approximate eigensystem,
where $ a_{I }$ is a bounded sequence of scalars and $\widetilde {h}_I$ spans a copy of $L_p $ .
In the mixed norm space $L_p (L_ q) $ where $ 1< q < \infty $ and $ 1< p < \infty $ , the biparameter Haar system forms an unconditional basis. Displaying extraordinary combinatorial strength, Capon [Reference Capon11] exhibited a so-called local product block basis $ k_{I \times J }$ spanning a complemented copy of $L_p (L_q ) $ such that
1.2 The present paper
Now we describe the main ideas in the approach of the present paper.
Introducing a transitive relation between operators $S,T$ on a Banach space X, we say that T is a projectional factor of S if there exist transfer operators $ A, B \colon X \to X $ such that
If merely $S = ATB$ , without the additional constraint $BA =Id_X$ , we say that T is a factor of S, or equivalently that S factors through T.
Clearly, if T is a projectional factor of S and S one of R, then T is a projectional factor of R: that is, being a projectional factor is a transitive relation. Given any operator $T : L_1(L_p) \to L_1(L_p)$ , the goal is to show that either T or $Id - T$ is a factor of the identity $Id : L_1(L_p) \to L_1(L_p)$ . In section 2.1, we expand on the quantitative aspects of the transitive relation in equation (6) and the role it plays in providing a step-by-step reduction of the problem, allowing for the replacement of a given operator with a simpler one that is easier to work with.
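To see why the constraint $BA = Id_X$ matters, note that in finite dimensions it forces B to be the inverse of A, so "projectional factor" degenerates to similarity; the notion carries content only when A can embed X onto a proper complemented subspace of itself, as in the infinite-dimensional setting of the paper. A toy 2×2 sketch of the degenerate case (the matrices are our own illustrative choices, not from the paper):

```python
def matmul(M, N):
    """Plain matrix multiplication for lists of rows."""
    return [[sum(M[r][k] * N[k][c] for k in range(len(N)))
             for c in range(len(N[0]))] for r in range(len(M))]

A = [[2, 1], [1, 1]]            # transfer operator A
B = [[1, -1], [-1, 2]]          # here necessarily B = A^{-1}, so B A = Id
T = [[3, 0], [0, 5]]

assert matmul(B, A) == [[1, 0], [0, 1]]      # the constraint B A = Id_X
S = matmul(matmul(A, T), B)                  # S = A T B: T is a projectional factor of S
# in the degenerate finite-dimensional case, S and T are similar and share the trace
assert S[0][0] + S[1][1] == T[0][0] + T[1][1]
```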
Let $T : L_1(L_p) \to L_1(L_p)$ be a bounded linear operator. It is represented by a matrix $T = ( T^{I,J})$ of operators $ T^{I,J} : L_1 \to L_1 $ , indexed by pairs of dyadic intervals $(I,J)$ : that is, on $f \in L_1(L_p)$ with Haar expansion
the operator T acts by
Theorem 6.1, the main result of this paper, asserts that there exists a bounded operator $ T^{0} :L_1 \to L_1 $ such that
meaning there exist bounded transfer operators $A, B : L_1(L_p) \to L_1(L_p) $ such that $ B A = Id _{L_1(L_p)} $ and
The ideas involved in the proof of Theorem 6.1 are based on the interplay of topological, geometric and probabilistic principles. Specifically, we build on compact families of $L_1 $ operators, extracted from $\mathrm{span}\{ T^{I,J} \} $ , and large deviation estimates for empirical processes:

(a) (Compactness.) We utilise the Semenov–Uksusov characterisation [Reference Semenov and Uksusov34, Reference Semenov and Uksusov35] of Haar multipliers on $L_1 $ and uncover compactness properties of the operators $ T^{I,J} : L_1 \to L_1 $ . See Theorem 3.2 and Theorem 3.4.

(b) (Stabilisation.) Large deviation estimates for the empirical distribution method give rise to a novel connection between factorisation problems on $L_1(L_p)$ and the concentration of measure phenomenon. See Lemma 5.3 and Lemma 5.4.
Step 1. We say that T is a diagonal operator if $ T^{I,J} = 0$ for $ I \neq J , $ in which case we put $ T^{L} =T^{L,L}$ . The first step provides the reduction to diagonal operators. Specifically, Theorem 4.1 asserts that for any operator $T = ( T^{I,J})$ , there exists a diagonal operator $T_{\mathrm {diag}} =( T^{L} )$ such that
The reduction in equation (10) results from compactness properties for the family of $L_1 $ operators $ T^{I,J}$ established in Theorem 3.2 and Theorem 3.4. Specifically, if $ f \in L_1$ , then the set
if, moreover, $T^{I,J}$ satisfies uniform offdiagonal estimates
then, for $ \eta> 0,$ there exists a stopping time collection of dyadic intervals $ \mathcal {A}$ satisfying $|\limsup \mathcal {A}| > 1 - \eta $ such that the set of operators
Recall that $ \mathcal {A} \subseteq \mathcal {D}$ is a stopping time collection if for $K, L \in \mathcal {A} $ and $ J \in \mathcal {D} $ , the assumption $ K \subset J \subset L $ implies that $ J \in \mathcal {A} .$ By Theorem 2.6, the orthogonal projection
is bounded on $L_1 $ when $ \mathcal {A} $ is a stopping time collection of dyadic intervals.
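The stopping time condition is purely combinatorial and can be checked mechanically. A minimal sketch in Python (the `(j, i)` encoding of the interval $[(i-1)/2^j, i/2^j)$ is our own device, not from the paper), verifying the defining implication on two small examples:

```python
from fractions import Fraction

def contains(J, K):
    """True if the dyadic interval K (encoded as (j, i)) is contained in J."""
    (jJ, iJ), (jK, iK) = J, K
    return (Fraction(iJ - 1, 2**jJ) <= Fraction(iK - 1, 2**jK)
            and Fraction(iK, 2**jK) <= Fraction(iJ, 2**jJ))

def is_stopping_time(A, depth):
    """Check the defining implication: K, L in A and K ⊂ J ⊂ L imply J in A,
    with J ranging over all dyadic intervals up to the given generation."""
    D = [(j, i) for j in range(depth + 1) for i in range(1, 2**j + 1)]
    return all(J in A
               for K in A for L in A for J in D
               if contains(L, K) and contains(L, J) and contains(J, K))

# the chain [0,1) ⊃ [0,1/2) ⊃ [0,1/4) is a stopping time collection,
# while omitting the middle interval breaks the property
assert is_stopping_time({(0, 1), (1, 1), (2, 1)}, 2)
assert not is_stopping_time({(0, 1), (2, 1)}, 2)
```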
Step 2. Next we show that it suffices to prove the factorisation in equation (9) for diagonal operators satisfying uniform offdiagonal estimates. We say that $ T = (R^{L}) $ is a reduced diagonal operator if the $R^L : L_1 \to L_1 $ satisfy
Proposition 5.6 asserts that there exists a reduced diagonal operator $ T^{\mathrm {red}}_{\mathrm {diag}} = (R^{L}) $ satisfying equation (14), such that
To prove equation (15), we use the compactness properties of $T_{\mathrm {diag}} = (T^{L}) $ together with measure concentration estimates [Reference Bourgain, Lindenstrauss and Milman8, Reference Schechtman33] associated to the empirical distribution method. See Lemma 5.3 and Lemma 5.4.
Step 3. Next we show that we may replace reduced diagonal operators by stable diagonal operators. We say that $ T^{\mathrm {stbl}}_{\mathrm {diag}} = (S^{L}) $ is a stable diagonal operator if
for dyadic intervals $M, L $ satisfying $ L \subseteq M$. We obtain in Proposition 5.2 that for any reduced diagonal operator $ T^{\mathrm {red}}_{\mathrm {diag}} $ , there exists a stable diagonal operator $ T^{\mathrm {stbl}}_{\mathrm {diag}} $ such that
We verify equation (17), exploiting again the compactness properties of $T^{\mathrm {red}}_{\mathrm {diag}} = (R^{L}) $ in tandem with the probabilistic estimates of Lemma 5.3 and Lemma 5.4.
Step 4. Proposition 6.2 provides the final step of the argument. It asserts that for any stable diagonal operator $ T^{\mathrm {stbl}}_{\mathrm {diag}} =(S^L)$ , there exists a bounded operator $ T^{0} :L_1 \to L_1 $ such that
To prove equation (18), we set up a telescoping chain of operators connecting any of the $ S^L $ to $S^{[0,1]} $ and invoke the stability estimates in equation (16) available for the operators $S^I $ when $ L \subset I \subset [0,1]. $ Thus we may finally take $T^0 = S^{[0,1]} .$
Step 5. Retracing our steps, taking into account that the notion of projectional factors forms a transitive relation, yields equation (9).
2 Preliminaries
2.1 Factors and projectional factors up to approximation
A common strategy in proving the primariness of spaces such as $L_p$ is to study the behaviour of a bounded linear operator on a $\sigma $ -subalgebra on a subset of $[0,1)$ of positive measure. This process may have to be repeated several times. We introduce some language that will make this process notationally easier.
Definition 2.1. Let X be a Banach space, $T,S:X\to X$ be bounded linear operators and $C\geq 1$ , $\varepsilon \geq 0$ .

(a) We say that T is a $C$-factor of S with error $\varepsilon $ if there exist $A,B:X\to X$ with $\|BTA-S\| \leq \varepsilon $ and $\|A\|\|B\|\leq C$ . We may also say that S $C$-factors through T with error $\varepsilon $ .

(b) We say that T is a $C$-projectional factor of S with error $\varepsilon $ if there exists a complemented subspace Y of X that is isomorphic to X with associated projection and isomorphism $P,A:X\to Y$ (i.e., $A^{-1}PA$ is the identity on X), so that $\|A^{-1}PTA - S\| \leq \varepsilon $ and $\|A\|\|A^{-1}P\|\leq C$ . We may also say that S $C$-projectionally factors through T with error $\varepsilon $ .
When the error is $\varepsilon = 0$ , we will simply say that T is a $C$-factor or $C$-projectional factor of S.
Remark 2.2. If T is a $C$-projectional factor of S with error $\varepsilon $ , then $I-T$ is a $C$-projectional factor of $I-S$ with error $\varepsilon $ . Indeed, if P and A are as in Definition 2.1 (b), then $PA = A$ and therefore $A^{-1}P(I-T)A = I - A^{-1}PTA$ : that is, $\|A^{-1}P(I-T)A - (I-S)\| \leq \varepsilon $ .
In a certain sense, being an approximate factor or projectional factor is a transitive property.
Proposition 2.3. Let X be a Banach space and $R,S,T:X\to X$ be bounded linear operators.

(a) If T is a $C$-factor of S with error $\varepsilon $ and S is a $D$-factor of R with error $\delta $ , then T is a $CD$-factor of R with error $D\varepsilon +\delta $ .

(b) If T is a $C$-projectional factor of S with error $\varepsilon $ and S is a $D$-projectional factor of R with error $\delta $ , then T is a $CD$-projectional factor of R with error $D\varepsilon +\delta $ .
Proof. The first statement is straightforward, and thus we only provide a proof of the second one. Let Y and Z be complemented subspaces of X, which are isomorphic to X. Let $P:X\to Y$ and $Q:X\to Z$ be the associated projections, and $A:X\to Y$ and $B:X\to Z$ the associated isomorphisms satisfying $\|A\|\|A^{-1}P\| \leq C$ , $\|B\|\|B^{-1}Q\|\leq D$ , $\|A^{-1}PTA - S\| \leq \varepsilon $ and $\|B^{-1}QSB - R\| \leq \delta $ .
We define $\tilde P = AQA^{-1}P$ and $\tilde A = AB$ . Then $\tilde P$ is a projection onto $\tilde A[X]$ and $\|\tilde A\|\|\tilde A^{-1}\tilde P\| \leq CD$ . We obtain
and thus $\|B^{-1}QA^{-1}PTAB - R\| \leq D\varepsilon +\delta $ . Finally, observe that
and thus $\|\tilde A^{-1}\tilde PT\tilde A - R\|\leq D\varepsilon + \delta $ .
The following explains the relation between primariness and approximate projectional factors.
Proposition 2.4. Let X be a Banach space that satisfies Pełczyński’s accordion property: that is, for some $1\leq p\leq \infty $ , we have that $X\simeq \ell _p(X)$ . Assume that there exist $C\geq 1$ and $0<\varepsilon <1/2$ so that every bounded linear operator $T:X\to X$ is a $C$-projectional factor with error $\varepsilon $ of a scalar operator: that is, a scalar multiple of the identity. Then for every bounded linear operator $T:X\to X$ , the identity $2C/(1-2\varepsilon )$-factors through either T or $I-T$ . In particular, X is primary.
Proof. Let Y be a subspace of X that is isomorphic to X and complemented in X, with associated projection and isomorphism $P,A:X\to Y$ , so that $\|A^{-1}P\|\|A\|\leq C$ and so that there exists a scalar $\lambda $ with $\|(A^{-1}P)TA - \lambda I\| \leq \varepsilon $ . If $|\lambda| \geq 1/2$ , then
and thus $B^{-1}$ exists with $\|B^{-1}\| \leq 1/(1-2\varepsilon )$ . We obtain that if $S = B^{-1}\lambda ^{-1}A^{-1}P$ , then $STA = I$ and $\|S\|\|A\| \leq 2C/(1-2\varepsilon )$ . If, on the other hand, $|\lambda| <1/2$ , then, because $\|A^{-1}P(I-T)A - (1-\lambda )I\|\leq \varepsilon $ , we achieve the same conclusion for $I-T$ instead of T.
If $X = Y\oplus Z$ and $Q:X\to Y$ is a projection, then we deduce that either Y or Z contains a complemented subspace isomorphic to X. To see this, we may assume that for some scalar $\lambda $ with $|\lambda| \ge 1/2$ , Q is a $C$-projectional factor with error $\varepsilon \in (0,1/2)$ of $\lambda I$ ; otherwise, we replace Q by $I-Q$ . From what we have proved so far, we deduce that there are operators $S,A :X\to X$ so that $SQA = I$ . Then $W = QA(X)$ is a subspace of Y that is isomorphic to X. It is also complemented via the projection $R = (S|_W)^{-1}S:X\to W$ . So we obtain that Y is a complemented subspace of X and X is isomorphic to a complemented subspace of Y. Since in addition X satisfies the accordion property, it follows from Pełczyński’s famous classical argument from [Reference Pełczyński30] that $X\simeq Y$ . Similarly, if $I-Q$ is a factor of the identity, we deduce $X\simeq Z$ .
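The invertibility step in the proof rests on the standard Neumann series: if $\|B - I\| \le q < 1$, then $B^{-1} = \sum_k (I-B)^k$ exists with $\|B^{-1}\| \le 1/(1-q)$, which with $q = 2\varepsilon$ gives the bound $1/(1-2\varepsilon)$ above. A small numerical sketch with a hypothetical 2×2 perturbation:

```python
# B = I + E with ||E|| < 1; the Neumann series sum_k (-E)^k converges to B^{-1}

def matmul(M, N):
    return [[sum(M[r][k] * N[k][c] for k in range(len(N)))
             for c in range(len(N[0]))] for r in range(len(M))]

E = [[0.2, 0.1], [0.0, -0.1]]                          # small perturbation
B = [[1.0 + E[0][0], E[0][1]], [E[1][0], 1.0 + E[1][1]]]

negE = [[-e for e in row] for row in E]
Binv = [[1.0, 0.0], [0.0, 1.0]]                        # k = 0 term of the series
term = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(60):                                    # partial sums of sum_k (-E)^k
    term = matmul(term, negE)
    Binv = [[Binv[r][c] + term[r][c] for c in range(2)] for r in range(2)]

prod = matmul(B, Binv)
assert all(abs(prod[r][c] - (1.0 if r == c else 0.0)) < 1e-9
           for r in range(2) for c in range(2))
```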
At this point, it is appropriate to point out that the above proposition applies to the space $L_1(X)$ for any Banach space X. Indeed, $L_1(X)$ is isomorphic to an $\ell _1$ sum of infinitely many copies of itself (see, e.g., [Reference Wojtaszczyk41, Example 22(a), page 44]).
2.2 The Haar system in $L_1$
We denote by $L_1$ the space of all (equivalence classes of) integrable scalar functions f with domain $[0,1)$ endowed with the norm $\|f\|_1 = \int _0^1|f(s)|\,ds$ . We will denote the Lebesgue measure of a measurable subset A of $[0,1)$ by $|A|$ .
We denote by $\mathcal {D}$ the collection of all dyadic intervals in $[0,1)$ , namely
We define the bijective function $\iota : \mathcal {D}\to \{2,3,\ldots \}$ by
The function $\iota $ defines a linear order on $\mathcal {D}$ . We recall the definition of the Haar system $(h_I)_{I\in \mathcal {D}}$ . For $I = [(i-1)/2^j,i/2^j)\in \mathcal {D}$ , we define $I^+, I^-\in \mathcal {D}$ as follows: $I^+ = [(i-1)/2^j,(2i-1)/2^{j+1})$ , $I^- = [(2i-1)/2^{j+1},i/2^j)$ , and
We additionally define $h_\emptyset = \chi _{[0,1)}$ and $\mathcal {D}^+ = \mathcal {D}\cup \{\emptyset \}$ . We also define $\iota (\emptyset ) = 1$ . Then $(h_I)_{I\in \mathcal {D}^+}$ is a monotone Schauder basis of $L_1$ , with the linear order induced by $\iota $ . Henceforth, whenever we write $\sum _{I\in \mathcal {D}^+}$ , we will always mean the sum is taken with this linear order $\iota $ .
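A minimal sketch of the linear order in Python: the concrete formula below, $\iota([(i-1)/2^j, i/2^j)) = 2^j + i$, is an assumption on our part (the displayed definition of $\iota$ did not survive extraction), but any such generation-by-generation enumeration of $\mathcal{D}^+$ would serve. Intervals are encoded as `(j, i)` pairs, with `None` standing for the empty symbol:

```python
def iota(interval):
    """Hypothetical concrete enumeration: iota(emptyset) = 1 and
    iota([(i-1)/2^j, i/2^j)) = 2^j + i, a bijection onto {2, 3, ...}."""
    if interval is None:                   # None stands for the empty symbol
        return 1
    j, i = interval
    return 2**j + i

# the elements of D^+ up to generation 2, in the linear order induced by iota
dyadic = [None] + [(j, i) for j in range(3) for i in range(1, 2**j + 1)]
ordered = sorted(dyadic, key=iota)
assert [iota(I) for I in ordered] == list(range(1, 9))   # consecutive integers
assert ordered[:4] == [None, (0, 1), (1, 1), (1, 2)]     # emptyset, [0,1), [0,1/2), [1/2,1)
```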
For each $n\in \mathbb {N}\cup \{0\}$ , we define
An important realisation that will be used multiple times in the sequel is the following. Let $I\in \mathcal {D}$ . Then there exist a unique $k_0 \in \mathbb {N}$ and a unique decreasing sequence of intervals $(I_k)_{k=0}^{k_0}$ in $\mathcal {D}^+$ so that $I_0 = \emptyset $ , $I_1 = [0,1)$ and $I_{k_0}=I$ ; and for $k=1,2, \ldots , k_0-1$ , $I_{k+1}=I^+_k$ or $I_{k+1}=I^-_k$ . In other words, $(I_k)_{k=1}^{k_0}$ consists of all elements of $\mathcal D^+$ that contain I, decreasingly ordered. For $k=1,2, \ldots , k_0-1$ , put $\theta _k = 1$ if $I_{k+1} = I_k^+$ and $\theta _k = -1$ if $I_{k+1} = I_k^-$ . We then have the following formula, already discovered by Haar:
Note that in the above representation, if we define $I_{k_0} = I$ , then $I_k = I_{k-1}^-$ or $I_k = I_{k-1}^+$ for $k=2,\ldots ,k_0$ . To simplify notation, we will henceforth make the convention $\theta _0 = 1$ and $|I_0|^{-1} = |\emptyset|^{-1} = 1$ to be able to write
This representation will be used multiple times in this paper.
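Haar's formula can be verified numerically on a dyadic grid. The sketch below (our own `(j, i)` interval encoding; the identity checked is $|I|^{-1}\chi_I = \sum_{k=0}^{k_0-1}\theta_k |I_k|^{-1} h_{I_k}$, with the conventions just introduced) works through the example $I = [1/4, 3/8)$:

```python
from fractions import Fraction

N = 6  # evaluate everything on the dyadic grid of step 2^-N

def haar_vals(j, i):
    """Values of h_I on the grid, for I = [(i-1)/2^j, i/2^j): +1 on I^+, -1 on I^-."""
    left, right = Fraction(i - 1, 2**j), Fraction(i, 2**j)
    mid = (left + right) / 2
    return [1 if left <= s < mid else (-1 if mid <= s < right else 0)
            for s in (Fraction(t, 2**N) for t in range(2**N))]

def chain(j, i):
    """Ancestors I_1 = [0,1) ⊃ ... ⊃ I_{k0} = I as (level, index) pairs, and the
    signs theta_k recording whether I_{k+1} is the left (+1) or right (-1) half of I_k."""
    Is, thetas = [(0, 1)], []
    for level in range(1, j + 1):
        ii = (i - 1) // 2**(j - level) + 1       # index of the ancestor at this level
        Is.append((level, ii))
        thetas.append(1 if ii % 2 == 1 else -1)  # odd index = left child = I_k^+
    return Is, thetas

j, i = 3, 3                                      # I = [1/4, 3/8)
Is, thetas = chain(j, i)
lhs = [2**j * (1 if Fraction(i - 1, 2**j) <= Fraction(t, 2**N) < Fraction(i, 2**j)
               else 0) for t in range(2**N)]     # |I|^{-1} chi_I on the grid
rhs = [1] * 2**N                                 # theta_0 |I_0|^{-1} h_emptyset = chi_[0,1)
for (lev, ii), th in zip(Is[:-1], thetas):       # pairs (I_k, theta_k), k = 1..k0-1
    rhs = [r + th * 2**lev * v for r, v in zip(rhs, haar_vals(lev, ii))]
assert rhs == lhs
```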
A relevant definition is that of $[\mathcal {D}^+]$ , the collection of all sequences $(I_k)_{k=0}^\infty $ in $\mathcal {D}^+$ so that $I_0 = \emptyset $ , $I_1 = [0,1)$ , and for each $k\in \mathbb {N}$ , $I_{k+1} = I_k^+$ or $I_{k+1} = I_k^-$ . Note that for $(I_k)_{k=0}^\infty \in [\mathcal {D}^+]$ and $k\in \mathbb {N}$ , $I_k\in \mathcal {D}_{k-1}$ . Each $(I_k)_{k=0}^\infty $ defines a sequence $(\theta _k)_{k=1}^\infty $ as described in the paragraph above. This yields a bijection between $[\mathcal {D}^+]$ and $\{-1,1\}^{\mathbb {N}}$ . This fact will be used more than once. On $\{-1,1\}^{\mathbb {N}}$ , we will consider the product of the uniform distribution on $\{-1,1\}$ , which via this bijection generates a probability on $[\mathcal {D}^+]$ , which we will also denote by $|\cdot|$ . Also, we consider on $[\mathcal D^+] $ the image topology of the product of the discrete topology on $\{-1,1\}$ via that bijection.
2.3 Haar multipliers on $L_1$
A Haar multiplier is a linear map D, defined on the linear span of the Haar system, for which every Haar vector $h_I$ is an eigenvector with eigenvalue $a_I$ . We denote the space of bounded Haar multipliers $D:L_1\to L_1$ by $\mathcal {L}_{HM}(L_1)$ . In this subsection, we recall a formula for the norm of a Haar multiplier that was observed by Semenov and Uksusov in [Reference Semenov and Uksusov34, Reference Semenov and Uksusov35]. We then use Haar multipliers to sketch a proof of the fact that every bounded linear operator on $L_1$ is an approximate 1projectional factor of a scalar operator.
This recent formula of Semenov and Uksusov is a very elegant characterisation of boundedness of Haar multipliers on $L_1$ . In that spirit, Girardi studied related operators of multiplier type on $L_p$ and $L_p(X)$ [Reference Girardi17]. Wark has since simplified the proof of Semenov–Uksusov [Reference Wark38] as well as extended the formula to the vector-valued case [Reference Wark39, Reference Wark40].
Proposition 2.5. Let $(I_k)_{k=0}^\infty \in [\mathcal {D}^+]$ be associated to $(\theta _k)_{k=1}^\infty \in \{-1,1\}^{\mathbb {N}}$ . For $k\in \mathbb {N}$ , define $B_k = I_k\setminus I_{k+1}$ , and let $(a_k)_{k=0}^n$ be a sequence of scalars.
Then we have
and for any $1\le m< n$ ,
Proof. Note that the sequence $(B_k)_{k=1}^\infty $ is a partition of $[0,1)$ , and for $k\in \mathbb {N}$ , $B_k$ is the subset of $[0,1)$ of measure $2^{-k}$ on which $\theta _k h_{I_k}$ takes the value $-1$ . Let $f = a_0 h_\emptyset + \sum _{k=1}^n \theta _ka_k|I_k|^{-1}h_{I_k}$ . For $k\in \mathbb {N}$ , put $b_k = a_k$ if $k\leq n$ and $b_k = 0$ otherwise. For each $k\in \mathbb {N}$ , the function f is constant on $B_k$ , and in fact for $s\in B_k$ , we have
Therefore, for any $m=1,2,\ldots ,n$ ,
where for each $k\in \mathbb {N}$ ,
Putting $X_0 = 0$ , a calculation yields that for all $k\in \mathbb {N}$ ,
Applying the triangle inequality to equations (23) and (24), we conclude
which yields the upper bound of equation (21). The lower bound is proved with a similar computation. To obtain equation (22), we deduce from equation (24)
and therefore
which yields
and proves equation (22).
Theorem 2.6 (Semenov–Uksusov, [Reference Semenov and Uksusov34, Reference Semenov and Uksusov35])
Let $(a_I)_{I\in \mathcal {D}^+}$ be a collection of scalars and D be the associated Haar multiplier. Define
where the supremum is taken over all $(I_k)_{k=0}^\infty \in [\mathcal {D}^+]$ . Then D is bounded (and thus extends to a bounded linear operator on $L_1$ ) if and only if $|||D|||<\infty $ . More precisely,
Proof. By equation (19), D is always well defined on the linear span of the set $\mathcal {X} = \{|I|^{-1}\chi _I:I\in \mathcal {D}\}$ . In fact, the closed convex symmetric hull of $\mathcal {X}$ is the unit ball of $L_1$ . We deduce that $\|D\| = \sup \{\|Df\|:f\in \mathcal {X}\}$ , under the convention that $\|D\| = \infty $ if and only if D is unbounded. Fix $f = |I|^{-1}\chi _I\in \mathcal {X}$ . Use equation (19) to write
Extend $(I_k)_{k=0}^{k_0}$ to a branch $(I_k)_{k=0}^\infty $ . By equation (21), we have
By the triangle inequality, $\|Df\|_{L_1} \leq \sum _{k=1}^{\infty }|a_{I_k} - a_{I_{k-1}}| + \lim _k|a_{I_{k}}| \leq |||D|||$ . The lower bound is achieved by taking in equation (27) all $f\in \mathcal {X}$ .
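For a Haar multiplier with only finitely many nonzero entries, the supremum over branches becomes a finite maximum, and the upper bound $\|Df\|_1 \le |||D|||$ for $f = |I|^{-1}\chi_I$ can be checked numerically. A sketch with hypothetical entries (all entries below generation 2 set to zero; `None` stands for the empty symbol):

```python
from fractions import Fraction

N = 3                                            # grid of step 2^-3
grid = [Fraction(t, 2**N) for t in range(2**N)]

def haar_vals(j, i):
    left, right = Fraction(i - 1, 2**j), Fraction(i, 2**j)
    mid = (left + right) / 2
    return [1 if left <= s < mid else (-1 if mid <= s < right else 0) for s in grid]

# hypothetical multiplier entries; everything below generation 2 is zero
a = {None: 1, (0, 1): 3, (1, 1): 0, (1, 2): 2, (2, 1): 1, (2, 2): 1, (2, 3): 0, (2, 4): 2}

# every branch (I_0, I_1, I_2, I_3), truncated at generation 2
branches = [(None, (0, 1), (1, c), (2, 2 * (c - 1) + g)) for c in (1, 2) for g in (1, 2)]
# variation along the branch plus the modulus of the last entry
# (the tail of the series contributes |0 - a_{I_3}|, since deeper entries vanish)
tri = max(sum(abs(a[b[k]] - a[b[k - 1]]) for k in range(1, 4)) + abs(a[b[3]])
          for b in branches)

# D f for f = |I|^{-1} chi_I with I = [0, 1/4): here f = h_empty + h_[0,1) + 2 h_[0,1/2),
# and D multiplies each Haar coefficient by the corresponding entry of a
Df = [a[None] * 1 + a[(0, 1)] * h1 + 2 * a[(1, 1)] * h2
      for h1, h2 in zip(haar_vals(0, 1), haar_vals(1, 1))]
norm_Df = sum(abs(v) for v in Df) / 2**N         # L1 norm as a grid average
assert norm_Df <= tri                            # consistent with ||D|| <= |||D|||
```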
The following special type of Haar multiplier will appear in the sequel.
Example 2.7. Let $\mathscr {A}\subset [\mathcal {D}^+]$ be a nonempty set, and define the set $\mathcal {A} = \cup _{k_0=0}^\infty \{I_{k_0}:(I_k)_{k=0}^\infty \in \mathscr {A}\}\subset \mathcal {D}^+$ . Let $P_{\mathscr {A}}$ denote the Haar multiplier that has entries $a_I = 1$ for $I\in \mathcal {A}$ and $a_I = 0$ otherwise. Then by Theorem 2.6, $\|P_{\mathscr {A}}\|\leq |||P_{\mathscr {A}}||| = 1$ , and therefore $P_{\mathscr {A}}$ defines a norm-one projection onto $Y_{\mathscr {A}} = \overline {\langle \{h_I:I\in \mathcal {A}\}\rangle }$ .
The following elementary remark will be useful eventually.
Remark 2.8. Let $\mathscr {A}$ be a nonempty closed subset of $[\mathcal {D}^+]$ and $\mathcal {A} = \cup _{k_0=0}^\infty \{I_{k_0}:(I_k)_{k=0}^\infty \in \mathscr {A}\}$ . Let D be a Haar multiplier with entries that are zero outside $\mathcal {A}$ . Then
Haar multipliers provide a short path to a proof of the fact that every operator on $L_1$ is an approximate 1projectional factor of a scalar operator, which in turn yields Enflo’s theorem [Reference Maurey26] that $L_1$ is primary.
Theorem 2.9. The following are true in the space $L_1$ .

(i) Let $D:L_1\to L_1$ be a bounded Haar multiplier. For every $\varepsilon>0$ , D is a $1$-projectional factor with error $\varepsilon $ of a scalar operator.

(ii) Let $T:L_1\to L_1$ be a bounded linear operator. For every $\varepsilon>0$ , T is a $1$-projectional factor with error $\varepsilon $ of a bounded Haar multiplier $D:L_1\to L_1$ .
In particular, for every $\varepsilon>0$ , every bounded linear operator $T:L_1\to L_1$ is a $1$-projectional factor with error $\varepsilon $ of a scalar operator.
We wish to provide a sketch of the proof of the above: first, we will use it at the end of the paper; and second, it provides an introduction to the basic methods used in the paper. Now, and numerous times in the sequel, we require the following notation and definition.
Notation. For every disjoint collection $\Delta $ of $\mathcal {D}^+$ and $\theta \in \{-1,1\}^\Delta $ , we denote $h_\Delta ^\theta = \sum _{J\in \Delta }\theta _Jh_J$ . If $\theta _J = 1$ for all $J\in \Delta $ , we write $h_\Delta = \sum _{J\in \Delta }h_J$ . For a finite disjoint collection $\Delta $ of $\mathcal {D}$ , we denote $\Delta ^* = \cup \{I:I\in \Delta \}$ .
Definition 2.10. A faithful Haar system is a collection $(\tilde h_I)_{I\in \mathcal {D}^+}$ so that for each $I\in \mathcal {D}^+$ , the function $\tilde h_I$ is of the form $\tilde h_I = h_{\Delta _I}^{\theta _I}$ , for some finite disjoint collection $\Delta _I$ of $\mathcal {D}$ , and so that

(i) $\Delta _\emptyset ^* = \Delta _{[0,1)}^* = [0,1)$ , and for each $I\in \mathcal {D}$ , we have $|\Delta _I^*| = |I|$ ,

(ii) for every $I\in \mathcal {D}$ , we have that $\Delta _{I^+}^* = [\tilde h_\emptyset \tilde h_I = 1]$ and $\Delta _{I^-}^* = [\tilde h_\emptyset \tilde h_I = -1]$ .
Remark 2.11. It is immediate that $(\tilde h_\emptyset \tilde h_I)_{I\in \mathcal {D}^+}$ is distributionally equivalent to $(h_I)_{I\in \mathcal {D}^+}$ . Therefore, $(\tilde h_I)_{I\in \mathcal {D}^+}$ is isometrically equivalent to $(h_I)_{I\in \mathcal {D}^+}$ , both in $L_1$ and in $L_\infty $ . In particular,
defines a norm-one projection onto a subspace Z of $L_1$ that is isometrically isomorphic to $L_1$ . Note that unless $\tilde h_\emptyset = 1$ , P is not a conditional expectation, as $P\chi _{[0,1)} = 0$ . Instead, it is of the form $Pf = \tilde h_\emptyset \mathbb{E}(\tilde h_\emptyset f\mid\Sigma )$ , where $\Sigma = \sigma ((\tilde h_\emptyset \tilde h_I)_{I\in \mathcal {D}^+})$ . Since $\tilde h_\emptyset $ is not $\Sigma $-measurable, it cannot be eliminated. The advantage of the notion of a faithful Haar system is that one can be constructed in every tail of the Haar system. The drawback is that it causes a slight notational burden when having to adjust for the initial function $\tilde h_\emptyset $ in several situations.
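As a sanity check on Definition 2.10, the standard Haar system itself (with $\Delta_I = \{I\}$, all signs $+1$, and $\tilde h_\emptyset = \chi_{[0,1)}$) is a faithful Haar system. The sketch below verifies condition (ii) for this trivial case on a dyadic grid (the `(j, i)` encoding is our own device):

```python
from fractions import Fraction

N = 4
grid = [Fraction(t, 2**N) for t in range(2**N)]

def haar_vals(j, i):
    left, right = Fraction(i - 1, 2**j), Fraction(i, 2**j)
    mid = (left + right) / 2
    return [1 if left <= s < mid else (-1 if mid <= s < right else 0) for s in grid]

def cells(j, i):
    """Grid cells covered by the dyadic interval [(i-1)/2^j, i/2^j)."""
    left, right = Fraction(i - 1, 2**j), Fraction(i, 2**j)
    return {t for t, s in enumerate(grid) if left <= s < right}

# condition (ii) for tilde{h}_I = h_I (Delta_I = {I}, tilde{h}_empty = chi_[0,1)):
# [h_I = 1] = I^+  and  [h_I = -1] = I^-
for j in range(3):
    for i in range(1, 2**j + 1):
        h = haar_vals(j, i)
        assert {t for t, v in enumerate(h) if v == 1} == cells(j + 1, 2 * i - 1)  # I^+
        assert {t for t, v in enumerate(h) if v == -1} == cells(j + 1, 2 * i)     # I^-
```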
We will several times recursively construct faithful Haar systems $(\tilde h_I)_{I\in \mathcal {D}^+}$ , which means we first choose $\tilde h_\emptyset $ , second $\tilde h_{[0,1)}$ and then $\tilde h_I$ , $I\in \mathcal D$ , assuming that $\tilde h_J$ was chosen for all $J\in \mathcal D ^+$ with $\iota (J) < \iota (I)$ .
Proof of Theorem 2.9
Let us sketch the proof of the first statement. Let $(a_I)_{I\in \mathcal {D}^+}$ be the entries of D. For every $I\in \mathcal {D}$ , denote by $Q_I$ the Haar multiplier that has entries 1 for all $J\subset I$ and zero for all others. Then $|||Q_I||| = 1$ . First, note that for every $\varepsilon>0$ , there exists $I_0\in \mathcal {D}^+$ so that $|||DQ_{I_0} - a_{I_0}Q_{I_0}|||\leq \varepsilon $ . Otherwise, we could easily deduce $|||D||| = \infty $ . Construct a dilated and renormalised faithful Haar system $(\tilde h_I)_{I\in \mathcal {D}^+}$ with closed linear span Z in the range of $Q_{I_0}$ , and let $P:L_1\to Z$ be the corresponding norm-one projection and $A:L_1\to Z$ be an onto isometry. Then $\|A^{-1}PDA - a_{I_0}I\| \leq \varepsilon $ .
For the second part, we will use that the Rademacher sequence $(r_n)_n$ (i.e., $r_n = \sum _{L\in \mathcal {D}_n}h_L$ for $n\in \mathbb {N}$ ) is weakly null in $L_1$ and $w^*$ -null in $(L_1)^* \equiv L_\infty $ . Using this fact, we inductively construct a faithful Haar system $(\tilde h_I)_{I\in \mathcal {D}^+}$ so that for each $I\neq J$ , we have
where $(\varepsilon _{(I,J)})_{(I,J)\in \mathcal {D}^+\times \mathcal {D}^+}$ is a pre-chosen collection of positive real numbers with $\sum \varepsilon _{(I,J)} \leq \varepsilon $ . This is done as follows. Assume that we have chosen $\tilde h_I$ for $\iota (I) = 1,\ldots ,k-1$ . Let $I\in \mathcal {D}^+$ with $\iota (I) = k$ , and let $I_0$ be the predecessor of I: that is, either $I = I_0^+$ or $I = I_0^-$ . Let us assume $I = I_0^+$ . We then choose the next function $\tilde h_I$ among the terms of a Rademacher sequence with support $[\tilde h_\emptyset \tilde h_{I_0} = 1]$ . Denote by Z the closed linear span of $(\tilde h_I)_{I\in \mathcal {D}^+}$ , and take the canonical projection $P:L_1\to Z$ as well as the onto isometry $A:L_1\to Z$ given by $A h_I = \tilde h_I$ . Consider the operator $S=A^{-1}PTA:L_1\to L_1$ , and note that for all $I\neq J$ , we have $|\langle h_I, S(|J|^{-1}h_J) \rangle | = |\langle \tilde h_I, T(|J|^{-1}\tilde h_J )\rangle | \leq \varepsilon _{(I,J)}$ . It follows that the entries $a_I = \langle h_I, S(|I|^{-1}h_I )\rangle $ define a bounded Haar multiplier D and $\|S - D\| \leq \varepsilon $ : that is, T is a 1-projectional factor with error $\varepsilon $ of D.
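To make the objects used above concrete, here is a minimal numerical sketch of the Haar functions and the Rademacher sequence $r_n = \sum _{L\in \mathcal {D}_n}h_L$ on a dyadic discretisation of $[0,1)$ . The function names and the grid are ours, not the paper's; this only illustrates the definitions.

```python
# Discretise [0,1) into 2**K cells; a function is the list of its cell values.
K = 10
N = 2 ** K

def haar(n, k):
    """L_infinity-normalised Haar function h_I for I = [k*2^-n, (k+1)*2^-n)."""
    vals = [0.0] * N
    width = N >> n                 # number of cells covering I
    start, half = k * width, width // 2
    for i in range(start, start + half):
        vals[i] = 1.0              # +1 on the left half of I
    for i in range(start + half, start + width):
        vals[i] = -1.0             # -1 on the right half of I
    return vals

def rademacher(n):
    """r_n = sum over all I in D_n of h_I; takes only the values +-1."""
    vals = [0.0] * N
    for k in range(2 ** n):
        vals = [a + b for a, b in zip(vals, haar(n, k))]
    return vals

def integral(f):
    return sum(f) / N

r2, r3 = rademacher(2), rademacher(3)
assert all(v in (1.0, -1.0) for v in r2)                     # +-1-valued
assert integral(r2) == 0.0                                   # mean zero
assert integral([a * b for a, b in zip(r2, r3)]) == 0.0      # orthogonality
```

The mean-zero and orthogonality checks are finite shadows of the fact that $(r_n)$ is weakly null in $L_1$ .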
2.4 Haar system spaces
We define Haar system spaces. These are Banach spaces of scalar functions generated by the Haar system in which two functions with the same distribution have the same norm. This abstraction does not impose any notational burden on the proof of the main result. The only difference from the case $X= L_p$ is the normalisation of the Haar basis. Properties such as unconditionality of the Haar system and reflexivity of $L_p$ are never used.
Definition 2.12. A Haar system space X is the completion of $Z = \langle \{ h_L:L\in \mathcal {D}^+\}\rangle = \langle \{\chi _I:I\in \mathcal {D}\}\rangle $ under a norm $\|\cdot \|$ that satisfies the following properties:

(i) If f, g are in Z and $|f|$ , $|g|$ have the same distribution, then $\|f\| = \|g\|$ .

(ii) $\|\chi _{[0,1)}\| = 1$ .
We denote the class of Haar system spaces by $\mathcal H$ .
Obviously, property (ii) may be achieved by scaling the norm of a space that satisfies (i). We include it anyway for notational convenience.
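As a sanity check, the $L_p$ norms indeed satisfy (i) and (ii) of Definition 2.12 on step functions. A small numerical sketch (the discretisation and sample values are ours):

```python
import random

def lp_norm(vals, p):
    """L_p norm of a step function constant on the cells of an equipartition of [0,1)."""
    n = len(vals)
    return (sum(abs(v) ** p for v in vals) / n) ** (1.0 / p)

cells = [3.0, -1.0, 0.5, 2.0, 0.0, -2.5, 1.0, 4.0]   # f on 8 dyadic cells
shuffled = cells[:]
random.shuffle(shuffled)   # |f| and the shuffled |g| have the same distribution

# property (i): equidistributed absolute values give equal norm
assert abs(lp_norm(cells, 3) - lp_norm(shuffled, 3)) < 1e-12
# property (ii): the norm of chi_[0,1) is 1
assert lp_norm([1.0] * 8, 3) == 1.0
```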
An important class of spaces that satisfy Definition 2.12, according to [Reference Lindenstrauss and Tzafriri25, Proposition 2.c.1], are separable rearrangement-invariant function spaces on $[0,1]$ . Recall that a (nonzero) Banach space Y of measurable scalar functions on $[0,1)$ is called rearrangement invariant (or, as in [Reference Rodin and Semyonov31], symmetric) if the following conditions hold true. First, whenever $f\in Y$ and g is a measurable function with $|g|\leq |f|$ a.e., then $g\in Y$ and $\|g\|_Y \leq \|f\|_Y$ . Second, if $u,v$ are in Y and they have the same distribution, then $\|u\|_Y = \|v\|_Y$ .
The following properties of a Haar system space X follow from elementary arguments. For completeness, we provide the proofs.
Proposition 2.13. Let X be a Haar system space.

(a) For every $f\in Z = \langle \{\chi _I:I\in \mathcal {D}\}\rangle $ , we have $\|f\|_{L_1}\leq \|f\|\leq \|f\|_{L_\infty }$ . Therefore, X can be naturally identified with a space of measurable scalar functions on $[0,1)$ and ${\overline Z^{\|\cdot \|_{L_\infty }}} \subset X \subset {L_1}$ .

(b) $Z = \langle \{\chi _I:I\in \mathcal {D}\}\rangle $ naturally coincides with a subspace of $X^*$ , and its closure $\overline Z$ in $X^*$ is also a Haar system space.

(c) The Haar system, in the usual linear order, is a monotone Schauder basis of X.

(d) For a finite union A of elements of $\mathcal {D}$ , we put $\mu _A = \|\chi _A\|^{-1}_X$ and $\nu _A = \|\chi _A\|^{-1}_{X^*}$ . Then $\mu _A\nu _A = |A|^{-1}$ . In particular, $(\nu _Lh_L,\mu _Lh_L)_{L\in \mathcal {D}^+}$ is a biorthogonal system in $X^*\times X$ .

(e) A faithful Haar system $(\widehat h_L)_{L\in \mathcal {D}^+}$ is isometrically equivalent to $(h_L)_{L\in \mathcal {D}^+}$ . In particular, $Pf = \sum _{L\in \mathcal {D}^+}\langle \nu _L\widehat h_L,f \rangle \mu _L\widehat h_L$ defines a norm-one projection onto a subspace of X that is isometrically isomorphic to X.
Proof. By the first condition in Definition 2.12, we have
for all $n\in \mathbb {N}$ , all permutations $\pi $ on $\mathcal D_n$ and all scalar families $(a_I:I\in \mathcal D_n)$ .
To show the first inequality in (a), let $n\in \mathbb {N}$ , $f=\sum _{I\in \mathcal D_n} a_I \chi _I\in Z$ , and let $\pi :\mathcal D_n\to \mathcal D_n $ be cyclic (i.e., $\{\pi ^r(I):r=1,2,\ldots ,2^n\}=\mathcal D_n$ for $I\in \mathcal D_n$ ). Then
The second inequality in (a) follows from the observation that for each $n\in \mathbb {N}$ , the family $(\chi _I: I \in \mathcal D_n)$ is $1$ -unconditional.
We identify each $g\in Z$ with the bounded functional $x^*_g$ , defined by $x^*_g(f)=\int _0^1 f g$ , and we denote the dual norm by $\|\cdot \|_{*}$ . From this representation it is clear that $\|\cdot \|_*$ also satisfies the first condition in Definition 2.12. Since $\|\chi _{[0,1)}\|=1$ , and since for all $f\in Z$ , $ |\int f| \le \|f\|_{L_1}\le \|f\|$ , we deduce that the second condition in Definition 2.12 holds true for the norm $\|\cdot \|_*$ .
Let $(h_n)$ be the Haar basis linearly ordered in the usual way, meaning that if $m<n$ , then either $\mathrm {supp}(h_n)\subset \mathrm {supp}(h_m)$ or $\mathrm {supp}(h_n)\cap \mathrm {supp}(h_m)=\emptyset $ . The claim of condition (c) follows from the fact that if $f=\sum _{j=1}^n a_j h_j\in Z$ , then for any scalar $a_{n+1}$ , the absolute values of the functions $f+a_{n+1} h_{n+1}$ and $fa_{n+1} h_{n+1}$ have the same distribution and their average is f.
Let $n\in \mathbb {N}$ and $I\in \mathcal D_n$ . Using, for $k>n$ , cyclic permutations on $\{ J\in \mathcal D_k: J\subset I\}$ , we deduce that $\sup _{f\in Z,\|f\|\le 1} \int _I f$ is attained for $f=\chi _I/\|\chi _I\|$ and thus $\|\chi _I\|\cdot \|\chi _I\|_*=2^{-n}$ . Since, moreover, for each n, $(\chi _I: I \in \mathcal D_n)$ is an orthogonal family, we deduce (d).
Since a faithful Haar system has the same joint distribution as the Haar system itself, we deduce the first part of (e). Since by (b), this is also true with respect to the dual norm, we deduce the second part of (e).
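For $X = L_p$ , item (d) can be checked by hand: $\mu _A = \|\chi _A\|_p^{-1} = |A|^{-1/p}$ and, since the dual norm is the $L_q$ norm with $1/p + 1/q = 1$ , $\nu _A = |A|^{-1/q}$ , so that $\mu _A\nu _A = |A|^{-1}$ . A quick numerical confirmation (the exponents and the interval are chosen by us):

```python
p = 1.5
q = p / (p - 1.0)            # conjugate exponent: 1/p + 1/q = 1
measure = 0.125              # |A| for a dyadic interval of level 3

mu = measure ** (-1.0 / p)   # normalising constant of chi_A in L_p
nu = measure ** (-1.0 / q)   # normalising constant of chi_A in L_q = (L_p)^*
assert abs(mu * nu - 1.0 / measure) < 1e-9   # mu_A * nu_A = |A|^{-1}
```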
In different parts of the proof, we will require additional properties of Haar system spaces. The following class of Haar system spaces is the one for which we prove our main theorem.
Definition 2.14. We define $\mathcal H^*$ as the class of all Banach spaces X in $\mathcal H$ satisfying

($\star $ ) the Rademacher sequence $(r_n)_n$ is not equivalent to the $\ell _1$ unit vector basis.
We define $\mathcal H^{**}$ as the class of all Banach spaces X in $\mathcal H$ satisfying

($\star \star $ ) no subsequence of the X-normalised Haar system $(\mu _Lh_L)_{L\in \mathcal {D}^+}$ is equivalent to the $\ell _1$ unit vector basis.
Remark 2.15. Examples of Haar system spaces that satisfy ( $\star $ ) and ( $\star \star $ ) are separable reflexive r.i. spaces.
We note and will use several times that ( $\star $ ) for Haar system spaces is equivalent to the condition that the Rademacher sequence $(r_n)$ is weakly null. To see this, first note that for any $(a_n)\in c_{00}$ , any $\sigma =(\sigma _n)\subset \{\pm 1\}$ and permutation $\pi $ on $\mathbb {N}$ , the distribution of $\sum _{n\in \mathbb {N}} a_n\sigma _n r_{\pi (n)}$ does not depend on $\sigma $ or $\pi $ . It follows that $(r_n)$ is a symmetric basic sequence in X. This implies that either $(r_n)$ is equivalent to the $\ell _1$ unit vector basis or it is weakly null in X. Indeed, if it is not equivalent to the unit vector basis of $\ell _1$ , and by symmetry no subsequence is equivalent to the $\ell _1$ unit vector basis, it must by Rosenthal’s $\ell _1$ Theorem have a weakly Cauchy subsequence; and thus, for some subsequence $(n_k)\subset \mathbb {N}$ , the sequence $(r_{n_{2k}} - r_{n_{2k-1}}:k\in \mathbb {N})$ is weakly null. But then the sequence $(r_{n_{2k}}+ r_{n_{2k-1}}:k\in \mathbb {N})$ is also weakly null, and thus $(r_{n_{2k}})$ is weakly null and by symmetry $(r_n)$ is weakly null.
2.5 Complemented subspaces of ${L_1(X)}$ isomorphic to $L_1(X)$
Let E, F be Banach spaces. The projective tensor product of E and F is the completion of the algebraic tensor product $E\otimes F$ under the norm
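For the reader's convenience, the norm in question is the standard projective tensor norm:

```latex
\[
  \|u\|_{\pi} \;=\; \inf\Bigl\{\,\sum_{i=1}^{n}\|e_i\|_E\,\|f_i\|_F
    \;:\; u=\sum_{i=1}^{n} e_i\otimes f_i,\ n\in\mathbb{N},\ e_i\in E,\ f_i\in F\Bigr\}.
\]
```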
It is well known and follows from the definition of Bochner–Lebesgue spaces that for any Banach space X, $L_1\otimes _\pi X\equiv L_1(X)$ via the identification $(f\otimes x)(s) = f(s)x$ . Then $L_\infty (X^*)$ canonically embeds into $(L_1(X))^*$ via the identification $\langle u,v\rangle = \int _0^1 \langle u(s),v(s)\rangle ds$ . Recall that by the definition of tensor norms, the projective tensor norm satisfies the following property, which we will use.

(•) For any pair of bounded linear operators $T:E\to E$ and $S:F\to F$ , there exists a unique bounded linear operator $T\otimes S:E\otimes _\pi F\to E\otimes _\pi F$ with $(T\otimes S)(e\otimes f) = (Te)\otimes (Sf)$ and $\|T\otimes S\| = \|T\|\|S\|$ .
The next standard statement explains one of the main features of the projective tensor product. For the sake of completeness, and because it is essential in this paper, we include the proof.
Proposition 2.16. Let Z be a subspace of $L_1$ that is isometrically isomorphic to $L_1$ via $A:L_1\to Z$ and 1-complemented in $L_1$ via $P:L_1\to Z$ . Let X be a Banach space, and let W be a subspace of X that is isometrically isomorphic to X via $B:X\to W$ and $1$ -complemented in X via $Q:X\to W$ .
Then the space $Z(W) =\overline {Z\otimes W}^{L_1(X)}$ coincides with $Z\otimes _\pi W$ and is isometrically isomorphic to $L_1(X)$ via $A\otimes B:L_1(X)\to Z(W)$ and 1-complemented in $L_1(X)$ via $P\otimes Q:L_1(X)\to Z(W)$ .
Proof. It is immediate that $P\otimes Q$ is a norm-one projection onto $Z(W)$ and that $A\otimes B$ is a norm-one map with dense image. It also follows that $A\otimes B$ is one-to-one on $L_1\otimes X$ . One way to see this is to identify $L_1\otimes X$ and $Z\otimes W$ with spaces of bilinear forms on $(L_1)^*\times X^*$ and $Z^*\times W^*$ , respectively. To conclude that $A\otimes B$ is an isometry and that $Z(W)=Z\otimes _{\pi } W$ , take u in $L_1\otimes X$ . Note that $v:=(A\otimes B)(u)$ is in $Z\otimes W\subset L_1\otimes X$ , and write $v = \sum _{i=1}^nf_i\otimes x_i$ , where $f_1,\ldots ,f_n\in L_1$ and $x_1,\ldots ,x_n\in X$ . We will see that $\sum _{i=1}^n\|f_i\|\|x_i\| \geq \|u\|$ , which will imply the conclusion by the definition of $\|v\|$ . Indeed, $v = (P\otimes Q)(v) = \sum _{i=1}^n(Pf_i)\otimes (Qx_i)$ and
It is immediate that $(A\otimes B)(y) = v$ and thus $y = u$ .
The following standard example will be used often to define projectional factors of an operator $T:L_1(X)\to L_1(X)$ .
Example 2.17. Let $(\widetilde h_I)_{I\in \mathcal {D}^+}$ , $(\widehat h_L)_{L\in \mathcal {D}^+}$ be faithful Haar systems, and let X be a Haar system space. Take
Then the map $P:L_1(X)\to L_1(X)$ given by
(recall that $\mu _I=\|\chi _I\|^{-1}_X$ and $\nu _L=\|\chi _L\|_{X^*}^{-1}$ ) is a norm-one projection onto $Z(X) = \overline {\langle \widetilde h_I\otimes \widehat h_L: I,L\in \mathcal {D}^+ \rangle }$ , and the map
is a linear isometry onto $Z(X)$ . Then any bounded linear operator $T:L_1(X)\to L_1(X)$ is a 1-projectional factor of $S = A^{-1}PTA:L_1(X)\to L_1(X)$ , so that for all $I,J,L,M\in \mathcal {D}^+$ , we have
Proposition 2.18. Let $\mathscr {A}\subset [\mathcal {D}^+]$ be a subset that has positive measure. Put $\mathcal {A} = \cup _{k_0=0}^\infty \{I_{k_0}:(I_k)_{k=0}^\infty \in \mathscr {A}\}$ and $Y_{\mathscr {A}} = \overline {\langle \{h_I:I\in \mathcal {A}\}\rangle }$ . Then there exists a subspace Z of $Y_{\mathscr {A}}$ that is isometrically isomorphic to $L_1$ and $1$ -complemented in $L_1$ .
Proof. By approximating $\mathscr {A}$ in measure by closed sets from the inside, we can assume that $\mathscr {A}$ is closed. For $k\in \mathbb {N}$ , let $A_k = \cup \{I:I\in \mathcal {A}\cap \mathcal {D}_k\}$ and
that is, $\mathscr {A}_k$ is the set of all sequences $(I_n)_{n=0}^\infty $ in $[\mathcal D^+]$ such that the k’th entry $I_k$ is a subset of $A_k$ . Then it follows that $\mathscr {A}=\bigcap _k \mathscr {A}_k$ , and letting $A = \cap _k A_k$ , we deduce that
But also, for any $J\notin \mathcal {A}$ , we have $J\cap A=\emptyset $ . It follows that for any $f\in L_1$ with $f|_{A^c} = 0$ and $J\notin \mathcal {A}$ , we have $\langle h_J, f\rangle = 0$ and thus $f\in Y_{\mathscr {A}}$ . In particular, the restriction operator $R_A:L_1\to L_1$ is a norm-one projection onto a subspace that is isometrically isomorphic to $L_1$ .
The above proposition leads to the following example, which will be useful in the sequel.
Example 2.19. Let $\mathscr {A}\subset [\mathcal {D}^+]$ be a subset that has positive measure. Then there exists a subspace Z of $Y_{\mathscr {A}}$ that is isometrically isomorphic to $L_1$ via $A:L_1\to Z$ and 1-complemented in $L_1$ via $P:L_1\to Z$ . In particular, for any Banach space X, the space
is isometrically isomorphic to $L_1(X)$ via $A\otimes I:L_1(X)\to Z(X)$ and 1-complemented in $L_1(X)$ via $P\otimes I$ .
2.6 Decompositions of operators on $L_1(X)$
We begin by listing further standard facts about projective tensor products. We then use these facts to associate to each bounded linear operator $T\!:\!L_1(X)\to L_1(X)$ a family of bounded linear operators on $L_1$ . In the next section, we will study the compactness properties of this family. In later sections, we use these properties to extract information about projectional factors of the operator T.
Let E, F be Banach spaces.

(a) For every $e_0^*\in E^*$ and $f_0^*\in F^*$ , we may define the bounded linear maps $q_{(e_0^*)}:E\otimes _\pi F\to F$ and $q^{(f_0^*)}:E\otimes _\pi F\to E$ given by $q_{(e_0^*)}(e\otimes f) = e_0^*(e)f$ and $q^{(f_0^*)}(e\otimes f) = f_0^*(f)e$ . Then $\q_{(e_0^*)}\ = \e_0^*\$ and $\q^{(f_0^*)}\ = \f_0^*\$ .

(b) For every $e_0\in E$ and $f_0\in F$ , we may define the maps $j_{(e_0)}:F\to E\otimes _\pi F$ and $j^{(f_0)}:E\to E\otimes _\pi F$ given by $j_{(e_0)}f = e_0\otimes f$ and $j^{(f_0)}e = e\otimes f_0$ . Then $\j_{(e_0)}\ = \e_0\$ and $\j^{(f_0)}\ = \f_0\$ .

(c) For every bounded linear operator $T:E\otimes _\pi F\to E\otimes _\pi F$ , $f_0^*\in F^*$ , and $f_0\in F$ , the map $T^{(f_0^*,f_0)}:=q^{(f_0^*)}Tj^{(f_0)}:E\to E$ is the unique bounded linear map so that for all $e^*\in E^*$ and $e\in E$ , we have $\langle e^*, T^{(f_0^*,f_0)}e\rangle = \langle e^*\otimes f_0^*, T(e\otimes f_0)\rangle $ .

(d) For every bounded linear operator $T:E\otimes _\pi F\to E\otimes _\pi F$ , $e_0^*\in E^*$ and $e_0\in E$ , the map $T_{(e_0^*,e_0)}:=q_{(e_0^*)}Tj_{(e_0)}:F\to F$ is the unique bounded linear map so that for all $f^*\in F^*$ and $f\in F$ , we have $\langle f^*, T_{(e_0^*,e_0)}f\rangle = \langle e_0^*\otimes f^*, T(e_0\otimes f)\rangle $ .
Notation. Let X be a Haar system space. For $L\in \mathcal {D}^+$ , we denote

(i) $q^L = q^{(\nu _Lh_L)}:L_1(X)\to L_1$ ,

(ii) $j^L = j^{(\mu _Lh_L)}:L_1\to L_1(X)$ , and

(iii) $P^L = j^Lq^L:L_1(X)\to L_1(X)$ .
Note that for any $k\in \mathbb {N}$ , $\|\sum _{\{L:\iota (L)\leq k\}}P^L\| = 1$ . This is because this operator coincides with $I\otimes P^{[\iota \leq k]}$ , where $P^{[\iota \leq k]}:X\to X$ is the basis projection onto $(\mu _Lh_L)_{\iota (L)\leq k}$ (this is easy to verify on vectors of the form $u =h_I\otimes h_L$ whose linear span is dense in $L_1(X)$ ). We may therefore state the following.
Remark 2.20. Let X be a Haar system space.

(i) For each $L\in \mathcal {D}^+$ , $P^L$ is a projection with image
$$\begin{align*}Y^L = \{f\otimes (\mu_Lh_L): f\in L_1\}\end{align*}$$that is isometrically isomorphic to $L_1$ . 
(ii) $(Y^L)_{L\in \mathcal {D}^+}$ forms a monotone Schauder decomposition of $L_1(X)$ . In particular, for every $u\in L_1(X)$ ,
$$\begin{align*}u = \sum_{L\in\mathcal{D}^+} P^Lu =\sum_{L\in\mathcal{D}^+} (q^Lu)\otimes (\mu_Lh_L).\end{align*}$$Thus, u admits a unique representation $u = \sum _{L\in \mathcal {D}^+}f_L\otimes (\mu _Lh_L)$ .
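The decomposition $u = \sum _L P^Lu$ of Remark 2.20 can be illustrated in a toy finite model, with X replaced by $\mathbb {R}^d$ and $L_1$ replaced by functions sampled on a grid. All names below are ours; this is only a finite-dimensional sketch of the formula, not the paper's construction.

```python
d, grid = 3, 8   # dimension of the toy X and number of sample points in [0,1)

def q(L, u):
    """Analogue of q^L: extract the scalar function paired with basis vector L."""
    return [row[L] for row in u]

def j(L, f):
    """Analogue of j^L: embed a scalar function into the L-th coordinate slot."""
    return [[f[s] if m == L else 0.0 for m in range(d)] for s in range(grid)]

def add(u, v):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(u, v)]

# A vector-valued function u: grid x d matrix of samples.
u = [[float(s * d + m) for m in range(d)] for s in range(grid)]
recon = [[0.0] * d for _ in range(grid)]
for L in range(d):
    recon = add(recon, j(L, q(L, u)))   # P^L u = j^L q^L u
assert recon == u                        # u = sum_L P^L u
```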
2.7 Operators on $L_1$ associated to an operator on $L_1(X)$
For a Haar system space X, we represent every bounded linear operator $T:L_1(X)\to L_1(X)$ as a matrix of operators $(T^{(L,M)})_{(L,M)\in \mathcal {D}^+\times \mathcal {D}^+}$ , each of which is defined on $L_1$ .
Notation. Let X be a Haar system space, and let $T:L_1(X)\to L_1(X)$ be a bounded linear operator. For $L,M\in \mathcal {D}^+$ , we denote $T^{(L,M)} = T^{(\nu _Lh_L,\mu _Mh_M)}$ (recall from Proposition 2.13 that the scalars $\mu _M$ and $\nu _L$ are positive and chosen so that $\nu _Lh_L$ is normalised in $X^*$ and $\mu _Mh_M$ is normalised in X), so that for every $u\in L_1(X)$ , we have
For $L\in \mathcal {D}^+$ , we denote $T^L = T^{(L,L)}$ .
The following type of operator is essential as it is easier to work with. A significant part of the paper shows that within the constraints of the problem under consideration, every operator $T:L_1(X)\to L_1(X)$ is a 1-projectional factor with error $\varepsilon $ of an X-diagonal operator.
Definition 2.21. Let X be a Haar system space. A bounded linear operator $T:L_1(X)\to L_1(X)$ is called X-diagonal if for all $L\neq M\in \mathcal {D}^+$ , $T^{(L,M)} = 0$ . We then call $(T^L)_{L\in \mathcal {D}^+}$ the entries of T.
Note that T is X-diagonal if and only if for all $f\in L_1$ and $L\in \mathcal {D}^+$ , we have $T(f\otimes (\mu _Lh_L)) = (T^Lf)\otimes (\mu _Lh_L)$ , which happens if and only if for all $L\in \mathcal {D}^+$ , the space $Y^L$ is T-invariant.
Remark 2.22. If X is a Haar system space and $T:L_1(X)\to L_1(X)$ is a bounded linear operator so that $\sum _{L\neq M}\|T^{(L,M)}\| = \varepsilon <\infty $ , then equation (29) yields that there exists an X-diagonal operator $\bar {T}:L_1(X)\to L_1(X)$ with entries $(T^L)_{L\in \mathcal {D}^+}$ so that $\|T - \bar {T}\| \leq \varepsilon $ .
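Remark 2.22 has a simple finite-dimensional analogue: on $\ell _1^n$ , where the operator norm of a matrix is its maximum absolute column sum, deleting the off-diagonal entries changes the operator by at most the total off-diagonal mass. A toy illustration (the matrix is chosen by us):

```python
def l1_op_norm(A):
    """Norm of a matrix as an operator on ell_1: maximum absolute column sum."""
    return max(sum(abs(row[j]) for row in A) for j in range(len(A[0])))

T = [[2.0, 0.1, 0.0],
     [0.05, -1.0, 0.2],
     [0.0, 0.1, 3.0]]
n = len(T)
diag = [[T[i][j] if i == j else 0.0 for j in range(n)] for i in range(n)]
error = [[T[i][j] - diag[i][j] for j in range(n)] for i in range(n)]
off_mass = sum(abs(T[i][j]) for i in range(n) for j in range(n) if i != j)
assert l1_op_norm(error) <= off_mass   # ||T - diag(T)|| <= sum of off-diagonal entries
```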
3 Compactness properties of families of operators
In this section, we extract compactness properties of families of operators associated to a $T:L_1(X)\to L_1(X)$ . These results will eventually be applied to families that resemble ones of the form $(T^{(L,M)})_{(L,M)\in \mathcal {D}^+}$ . The achieved compactness will be used later in a regularisation process that will allow us to extract ‘nicer’ operators that projectionally factor through T. We have chosen to present this section in a more abstract setting that permits more elegant statements and proofs.
3.1 WOT-sequentially compact families
Taking WOT-limits of certain sequences of operators of the form $T^{(x^*,x)}$ is an important component of the proof. This element was already present in the approach of Capon [Reference Capon11, Reference Capon12].
The following essential lemma, due to Rosenthal, is needed in this subsection as well as in the next one. A proof can be given, for example, by induction on j for $\varepsilon = 2^{-j}\sup _n\|\xi _n\|_1$ .
Lemma 3.1 [Reference Rosenthal32, Lemma 1.1]
Let $(\xi _n)_n$ be a bounded sequence of elements of $\ell _1$ and $\varepsilon>0$ . Then there exists an infinite set $N = \{n_j:j\in \mathbb {N}\}\in [\mathbb {N}]^\infty $ so that for every $j_0\in \mathbb {N}$ , we have $\sum _{j\neq j_0}|\xi _{n_{j_0}}(n_j)| \leq \varepsilon $ .
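The conclusion of Lemma 3.1 says that along the extracted subsequence, each $\xi _{n_{j_0}}$ is 'almost supported' on its own index. A finite sketch of what is being asserted (the example family is ours; the lemma itself concerns infinite sequences and a genuine extraction):

```python
def almost_diagonal(xis, eps):
    """Check Rosenthal's conclusion on a finite family: for each j0,
    the off-diagonal mass sum_{j != j0} |xi_{j0}(j)| is at most eps."""
    n = len(xis)
    return all(
        sum(abs(xis[j0][j]) for j in range(n) if j != j0) <= eps
        for j0 in range(n)
    )

# xi_n = e_n plus a small geometric tail: here no extraction is needed,
# the whole (truncated) sequence already satisfies the conclusion with eps = 1/3.
xis = [[1.0 if m == n else 4.0 ** (-(m + 1)) for m in range(8)]
       for n in range(8)]
assert almost_diagonal(xis, 1.0 / 3.0)
assert not almost_diagonal([[1.0, 1.0], [1.0, 1.0]], 0.5)   # here one must extract
```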
Here, WOT stands for the weak operator topology in $L_1(X)$ .
Theorem 3.2. Let X be a Banach space, $T:L_1(X)\to L_1(X)$ be a bounded linear operator and A, B be bounded subsets of $X^*$ and X, respectively. Assume that B contains no sequence that is equivalent to the unit vector basis of $\ell _1$ . Then for every $f\in L_1$ , the set
is a uniformly integrable (and thus weakly relatively compact) subset of $L_1$ . In particular, every sequence in $\{T^{(x^*,x)}: (x^*,x)\in A\times B\}$ has a WOTconvergent subsequence.
Proof. The ‘in particular’ part follows from the separability of $L_1$ and the fact that the set in question is bounded by $\|T\|\sup _{(x^*,x)\in A\times B}\|x^*\|\|x\|$ .
Fix a sequence $(x_n^*,x_n)\in A\times B$ . Assume that $(T^{(x_n^*,x_n)}f)_n$ is not uniformly integrable. Then after passing to a subsequence, there exist $\delta>0$ and a sequence of disjoint measurable subsets $(A_n)_n$ of $[0,1)$ so that for all $n\in \mathbb {N}$ , we have
For every $n\in \mathbb {N}$ , define the scalar sequence $\xi _n = (\xi _n(m))_m$ given by $\xi _n(m) = \langle \chi _{A_m}\otimes x_m^*,T(f\otimes x_n)\rangle $ . Then for every $m_0\in \mathbb {N}$ , we have that for appropriate scalars $(\zeta _m)_{m=1}^N$ of modulus one,
By Rosenthal’s Lemma 3.1, there exists an infinite subset $N = \{n_j:j\in \mathbb {N}\}$ of $\mathbb {N}$ so that for all $i_0\in \mathbb {N}$ , we have $\sum _{j\neq i_0}|\xi _{n_{i_0}}(n_j)| \leq \delta /2$ . After relabelling, for all $n_0\in \mathbb {N}$ , we have
We now show that $(x_n)_n$ is equivalent to the unit vector basis of $\ell _1$ . Fix scalars $a_1,\ldots ,a_N$ . For appropriate scalars $\theta _1,\ldots ,\theta _N$ of modulus 1, we have
Put
Also,
Thus, $\|\sum _{n=1}^Na_nx_n\| \geq c\sum _{n=1}^N|a_n|$ , where $c = \delta /(2\|T\|\|f\|\sup _{x^*\in A}\|x^*\|)$ : that is, $(x_n)_n$ is equivalent to the unit vector basis of $\ell _1$ , contradicting the assumption on B.
3.2 Compactness in operator norm
We discuss families that are uniformly eventually close to multipliers and how to obtain compact sets from them. This is particularly important in the sequel because compactness will be essential in achieving strong stabilisation properties of operators $T:L_1(X)\to L_1(X)$ .
Notation. For $n\in \mathbb {N}$ , we denote by $P_{(\leq n)}:L_1\to L_1$ the norm-one canonical basis projection onto $\langle \{h_I:I\in \mathcal {D}^n\}\rangle $ . We also denote $P_{(>n)} = I - P_{(\leq n)}$ .
Definition 3.3. A set $\mathscr {T}$ of bounded linear operators on $L_1$ is called uniformly eventually close to Haar multipliers if there exists a collection $(D_T)_{T\in \mathscr {T}}$ in $\mathcal {L}_{HM}(L_1)$ so that
The main result of this subsection is the first one in the paper that requires a certain amount of legwork.
Theorem 3.4 (Fundamental Lemma)
Let X be a Banach space, A, B be bounded subsets of $X^*$ and X, respectively, and $C\subset A\times B$ . Let $T:L_1(X)\to L_1(X)$ be a bounded linear operator, and assume the following:

(i) The set B contains no sequence that is equivalent to the unit vector basis of $\ell _1$ .

(ii) The set $\{T^{(x^*,x)}:(x^*,x)\in C\}$ is uniformly eventually close to Haar multipliers.
Then for every $\eta>0$ , there exists a closed subset $\mathscr {A}$ of $[\mathcal {D}^+]$ with $|\mathscr {A}|>1-\eta $ so that the set $\{T^{(x^*,x)}P_{\mathscr {A}}:(x^*,x)\in C\}$ is relatively compact in the operator norm topology.
Remark 3.5. It is not hard to see that the unit ball of $\mathcal {L}_{HM}(L_1)$ is a compact set in the strong operator topology of $L_1$ . In fact, this is the $w^*$ -topology inherited from a predual of $\mathcal {L}_{HM}(L_1)$ , namely Rosenthal’s Stopping Time space studied by Bang and Odell in [Reference Bang and Odell4, Reference Bang and Odell5], by Dew in [Reference Dew14] and by Apatsidis in [Reference Apatsidis2]. The Fundamental Lemma (Theorem 3.4) states that under the right conditions, strong operator convergence yields convergence in operator norm on a big subspace of $L_1$ . Therefore, this is a type of Egorov Theorem. We point out that some restriction on the family of operators is necessary for the conclusion to hold. If one takes, for example, $D_n = P_{(\leq n)}$ , then this sequence converges to I in the strong operator topology. But for no nonempty set of branches $\mathscr {A}$ is the set $\{D_nP_{\mathscr {A}}:n\in \mathbb {N}\}$ relatively compact in the operator norm topology.
Lemma 3.6. Let $r>0$ , $(I_k)_{k=0}^\infty \in [\mathcal {D}^+]$ be associated to $(\theta _k)_{k=1}^\infty \in \{-1,1\}^{\mathbb {N}}$ and $(a_k^n)_{(k,n)\in (\{0\}\cup \mathbb {N})\times \mathbb {N}}$ be a collection of scalars. Assume that there exist $k_1<\ell _1<k_2<\ell _2<\cdots $ so that for each $n\in \mathbb {N}$ , we have
For every $\ell ,n\in \mathbb {N}$ , define $f_n^\ell = \sum _{k=0}^\ell a_k^n\theta _k|I_k|^{-1}h_{I_k}$ . Then there exists a sequence of pairwise disjoint measurable subsets $(A_n)_n$ of $[0,1)$ so that for all $n\in \mathbb {N}$ and $\ell \geq \ell _n$ , we have
Proof. Let $(B_k)$ be the partition of $[0,1)$ defined by $B_k=I_k\setminus I_{k+1}$ , $k\in \mathbb {N}$ . We conclude from the inequality in equation (22) in Proposition 2.5 that:

(i) for every $k \leq \ell _n \leq \ell \in \mathbb {N}$ and $s\in B_k$ , we have $f_n^\ell (s) = f^{\ell _n}_n(s)$ and

(ii) for every $m\leq \ell _n\in \mathbb {N}$ , we have
$$\begin{align*}\int_{\cup_{k=m}^{\ell_n} B_k}|f_n^{\ell_n}(s)|\,ds \geq \frac{1}{3}\sum_{k=m+1}^{\ell_n} |a_k^n - a_{k-1}^n|.\end{align*}$$
Put $A_n = \cup _{i=k_n}^{\ell _n}B_i$ . The conclusion follows directly from (i) and (ii).
Proof of Theorem 3.4
Put $\mathscr {T} = \{T^{(x^*,x)}:(x^*,x)\in A\times B\}$ . Take a family $(D_T)_{T}$ as in Definition 3.3. For each $T\in \mathscr {T}$ , we have
Both $(\varepsilon _k)_k$ and $(\delta _k)_k$ tend to zero. For each $T\in \mathscr {T}$ , denote by $(a^T_I)_{I\in \mathcal {D}^+}$ the entries of $D_T$ .
Claim: Fix $\sigma = (I_k)_{k=1}^\infty \in [\mathcal {D}^+]$ and $r>0$ . Then there exists $k_0\in \mathbb {N}$ so that for all $T\in \mathscr {T}$ , we have $\sum _{k=k_0}^\infty |a^T_{I_k} - a^T_{I_{k-1}}| \leq r$ .
We will assume that the claim is true and proceed with the rest of the proof. For every $N,k_0\in \mathbb {N}$ , let
which is a closed subset of $[\mathcal {D}^+]$ , and by the claim, we have $\cup _{k_0}\mathscr {A}_{N,k_0} = [\mathcal {D}^+]$ for all $N\in \mathbb {N}$ . We may therefore pick a strictly increasing sequence of natural numbers $(k_N)$ so that for each N, we have $|\mathscr {A}_{N,k_N}| \geq 1-\eta /2^N$ . We put $\mathscr {A} = \cap _N\mathscr {A}_{N,k_N}$ , and we demonstrate that this is the desired set.
To show that $\{TP_{\mathscr {A}}:T\in \mathscr {T}\}$ is relatively compact with respect to the operator norm, we fix $\varepsilon>0$ and $(T_n)_n$ in $\mathscr {T}$ . For each $n\in \mathbb {N}$ , denote $D_n = D_{T_n}$ . We will find $M\in [\mathbb {N}]^{\infty }$ so that for all $n,m\in M$ , we have $\|T_nP_{\mathscr {A}} - T_mP_{\mathscr {A}}\| \leq 11\varepsilon $ . Fix $N\in \mathbb {N}$ so that $2^{-N}\leq \varepsilon $ , $\varepsilon _{k_N}\leq \varepsilon $ , and $\delta _{k_N}\leq \varepsilon $ . For each $n\in \mathbb {N}$ , write
Then we have $\|A_n\| \leq \varepsilon _{k_N}\leq \varepsilon $ and $\|B_n\| \leq \delta _{k_N}\leq \varepsilon $ . By passing to a subsequence of $(T_n)$ , we may assume that for all $n,m\in \mathbb {N}$ , we have (letting $a_I^n=a^{T_n}_I$ )
Since the $C_n$ are bounded elements of a finite-dimensional space, we can also assume that $\|C_n - C_m\| \leq \varepsilon $ , for $m,n\in \mathbb {N}$ . Therefore, for $n,m\in \mathbb {N}$ , we have
Luckily, the remaining quantity $\Lambda $ is the norm of a Haar multiplier on $L_1$ , and we know how to compute this. If for $\sigma = (I_k)_{k=0}^\infty \in \mathscr {A}$ , we put
Then by Remark 2.8, $\Lambda = \sup _{\sigma \in \mathscr {A}}\Lambda _\sigma $ and thus $\|T_nP_{\mathscr {A}} - T_mP_{\mathscr {A}}\|\leq 11\varepsilon $ .
We now provide the required proof of the claim. We fix $\sigma = (I_k)_{k=0}^\infty $ , with associated signs $(\theta _k)_{k=0}^\infty $ . Let us assume that the conclusion fails. Then we may find $(T_n)_n = (T^{(x^*_n,x_n)})_n$ in $\mathscr {T}$ , where each $T_n$ is associated with a $D_n$ (each $D_n$ has entries $(a_I^n)_{I\in \mathcal {D}^+}$ ), and $k_1 < \ell _1<k_2<\ell _2<\cdots $ so that for all $n\in \mathbb {N}$
Pick $k_0\in \mathbb {N}$ so that $\varepsilon _{k_0}\leq r/12$ . For $k,n\in \mathbb {N}$ define $b_k^n = 0$ if $k\leq k_0$ and $b_k^n = a_{I_k}^{n}$ if $k>k_0$ . If we additionally assume that $k_1>k_0$ , then for all $n\in \mathbb {N}$ , we have
For each $n,\ell \in \mathbb {N}$ , put
By Lemma 3.6, we may find a sequence $(A_n)_n$ of disjoint measurable sets so that for each $n\in \mathbb {N}$ , the sequence $(f_n^\ell (s))_{\ell \geq \ell _n}$ is constant for all $s\in A_n$ and $\|f_n^{\ell _n}|_{A_n}\|_{L_1} \geq r/3$ . For each $n\in \mathbb {N}$ fix $g_n$ in the unit sphere of $L_\infty $ with support in $A_n$ so that for all $\ell \geq \ell _n$
Note that for all $\ell \in \mathbb {N}$ , $\|\phi _\ell \|_{L_1}\leq 2$ . Then for all $n\in \mathbb {N}$ and $\ell \geq \ell _n$ ,
Pick an $L\in [\mathbb {N}]^\infty $ so that for each $m,n\in \mathbb {N}$ , the limit
Because the sequence $(g_m)_m$ is disjointly supported, an identical calculation as in equation (30) yields that for all $n\in \mathbb {N}$ , we have
Thus, by Rosenthal’s Lemma 3.1, we may pass to a subsequence and relabel so that for all $n_0\in \mathbb {N}$ , we have $\sum _{m\neq n_0}|\xi _{n_0}(m)| \leq r/8$ .
We will show that $(x_n)_n$ must be equivalent to the unit vector basis of $\ell _1$ , which would contradict our assumption and thus finish the proof. Fix scalars $a_1,\ldots ,a_N$ , and for $\ell \in L$ with $\ell \geq \ell _N$ pick appropriate scalars $\zeta ^\ell _1,\ldots ,\zeta ^\ell _N$ of modulus one so that we have
and put
But also,
Therefore, $\|\sum _{n=1}^Na_nx_n\| \geq r/(16\|T\|\sup _{x^*\in A}\|x^*\|)\sum _{n=1}^N|a_n|$ .
4 Projectional factors of X-diagonal operators
The main purpose of the section is to prove the following first step towards the final result. The Fundamental Lemma (Theorem 3.4) is a necessary part of the proof.
Theorem 4.1. Let X be in $\mathcal H^*$ , and let $T:L_1(X)\to L_1(X)$ be a bounded linear operator. Then for every $\varepsilon>0$ , T is a $1$ -projectional factor with error $\varepsilon $ of an X-diagonal operator $S:L_1(X)\to L_1(X)$ .
The strategy is to first pass to an operator S with the family $(S^{(L,M)})_{L\neq M}$ uniformly eventually close to Haar multipliers (in reality, S satisfies something slightly stronger). We will then use the Fundamental Lemma to eliminate these entries. The following result states how uniform eventual proximity to Haar multipliers is achieved in practice.
Lemma 4.2. Let $\mathscr {T}$ be a subset of $\mathcal {L}(L_1)$ and $(\varepsilon _{(I,J)})_{(I,J)\in \mathcal {D}^+\times \mathcal {D}^+}$ be a summable collection of positive real numbers. If for every $I\neq J\in \mathcal {D}^+$ and $T\in \mathscr {T}$ , we have $\big |\langle h_I, T\big (|J|^{-1}h_J\big )\rangle \big | \leq \varepsilon _{(I,J)}$ , then $\mathscr {T}$ is uniformly eventually close to Haar multipliers.
Proof. For fixed $T\in \mathscr {T}$ , put $a_I = \langle h_I, T(|I|^{-1}h_I)\rangle $ . This collection defines a bounded Haar multiplier $D_T$ because for all f in the unit ball of $L_1$ , $\|(T-D_T)f\| \leq \sum _{I\in \mathcal {D}^+}\sum _{\{J\in \mathcal {D}^+: J\neq I\}}|\langle h_I,T(|J|^{-1}h_J)\rangle | <\infty $ . Also, for all $n\in \mathbb {N}$ ,
Both $(\varepsilon _n)_n$ and $(\delta _n)_n$ tend to zero.
The next lemma is the basic tool used to achieve the first step.
Lemma 4.3. Let X be in $\mathcal H^*$ and $\mathscr {T}\subset \mathcal {L}(X)$ , $G\subset X^*$ and $F\subset X$ be finite sets. Then for any $\varepsilon>0$ , there exists $i_0\in \mathbb {N}$ so that for any disjoint collection $\Delta $ of $\mathcal {D}^+$ with $\min \iota (\Delta )\geq i_0$ and any $\theta \in \{-1,1\}^\Delta $ , we have
(recall that $h^{\theta }_\Delta $ was introduced before Definition 2.10).
Proof. The result is an immediate consequence of the following fact: let $(\Delta _k)$ be a sequence of finite disjoint collections of $\mathcal {D}^+$ with $\lim _k\min \iota (\Delta _k) = \infty $ , and for every $k\in \mathbb {N}$ let $\theta _k\in \{-1,1\}^{\Delta _k}$ .

(a) The sequence $(h_{\Delta _k}^{\theta _k})_k$ is weakly null.

(b) The sequence $(h_{\Delta _k}^{\theta _k})_k$ is a bounded block sequence in $X^*$ and thus is $w^*$ null.
There is nothing further to say about statement (b). We now explain how statement (a) is achieved. Note that any sequence of independent $\{-1,1\}$ -valued random variables of mean $0$ is distributionally equivalent to $(r_n)_n$ and thus weakly null. Any sequence as in statement (a) has a subsequence that is of the form $(\frac {r_n+r^{\prime }_n}2)$ , where $(r_n)$ and $(r^{\prime }_n)$ are both sequences of independent $\{-1,1\}$ -valued random variables of mean $0$ . Thus, it is weakly null as well.
We carry out the first step towards the proof of Theorem 4.1.
Proposition 4.4. Let X be in $\mathcal H^{*}$ , and denote by C the set of all pairs $(g,f)$ in $B_{X^*}\times B_X$ so that g and f have finite and disjoint supports with respect to the Haar system. Then every bounded linear operator $T:L_1(X)\to L_1(X)$ is a 1-projectional factor of a bounded linear operator $S:L_1(X) \to L_1(X)$ so that the family $\{S^{(g,f)}:(g,f)\in C\}$ is uniformly eventually close to Haar multipliers.
Proof. We will inductively construct faithful Haar systems $(\widetilde h_I)_{I\in \mathcal {D}^+}$ and $(\widehat h_L)_{L\in \mathcal {D}^+}$ . In each step k of the induction, we will define $\widetilde h_I$ and then $\widehat h_L$ with $k = \iota (I) = \iota (L)$ (i.e., $I=L$ , but we separate the notation for clarity). These vectors are of the form $\widetilde h_I = \sum _{J\in \Delta _I}h_J$ and