
Equivalency of multi-state survival signatures of multi-state systems of different sizes and its use in the comparison of systems

Published online by Cambridge University Press:  10 June 2022

He Yi
Affiliation:
School of Economics and Management, Beijing University of Chemical Technology, Beijing 100029, China. E-mail: yihe@mail.buct.edu.cn
Narayanaswamy Balakrishnan
Affiliation:
Department of Mathematics and Statistics, McMaster University, Hamilton L8S 4K1, Ontario, Canada. E-mail: bala@mcmaster.ca
Xiang Li
Affiliation:
School of Economics and Management, Beijing University of Chemical Technology, Beijing 100029, China. E-mail: lixiang@mail.buct.edu.cn

Abstract

In this paper, the multi-state survival signature is first redefined for multi-state coherent or mixed systems with independent and identically distributed (i.i.d.) multi-state components. With the assumption of independence of component lifetimes at different state levels, transformation formulas of multi-state survival signatures of different sizes are established through the use of equivalent systems and a generalized triangle rule for order statistics from several independent and non-identical distributions. The results obtained facilitate stochastic comparisons of multi-state coherent or mixed systems with different numbers of i.i.d. multi-state components. Specific examples are finally presented to illustrate the transformation formulas established here, and also their use in comparing systems of different sizes.

Research Article

Copyright © The Author(s), 2022. Published by Cambridge University Press

1. Introduction

Signature theory, as an important part of reliability theory, contributes fundamentally to describing the structures of reliability systems and to facilitating stochastic comparisons of different systems. The concept of system signature, proposed originally by Samaniego [Reference Samaniego22], is a vector $\boldsymbol{s} = ({s_1}, \ldots ,{s_n})$, with element ${s_i}$ being the probability that the failure of a coherent system is caused by the $i\textrm{th}$ ordered failure from its n independent and identically distributed (i.i.d.) components. For coherent systems with the same number of i.i.d. components, Kochar et al. [Reference Kochar, Mukerjee and Samaniego13] established that the usual stochastic ordering, hazard rate ordering and likelihood ratio ordering of system signatures lead to corresponding orderings of system lifetimes. More theoretical results and applications of the system signature can be found in the book by Samaniego [Reference Samaniego23]. Stochastic comparisons of coherent systems based on the system signature have recently been discussed in different settings; for example, with components ordered in hazard rate/reverse hazard rate/likelihood ratio ordering [Reference Amini-Seresht, Khaledi and Kochar1], with exchangeable or dependent non-exchangeable components [Reference Navarro and Fernandez-Sanchez18], with different types or even different sizes of components [Reference Ding, Fang and Zhao9], with information on the system state or the number of failed components under single/double monitoring [Reference Goli12], or by taking both performance and cost into consideration [Reference Lindqvist, Samaniego and Wang15].

As discussed by Yi and Cui [Reference Yi and Cui31], there are many efficient methods for computing the system signature, and each has its own advantages and limitations. Several related concepts have also been discussed in the literature; for example, minimal/maximal signature [Reference Navarro, Ruiz and Sandoval19], dynamic signature [Reference Samaniego, Balakrishnan and Navarro24], joint signature [Reference Navarro, Samaniego and Balakrishnan20] and ordered system signature [Reference Balakrishnan and Volterman2] are some important ones among them. The survival signature, as a generalization of the system signature, was originally proposed by Coolen and Coolen-Maturi [Reference Coolen, Coolen-Maturi, Zamojski, Mazurkiewicz, Sugier, Walkowiak and Kacprzyk5] for the survival of systems with multiple types of components, and is now widely used for studying many practical systems such as large complex networks [Reference Behrensdorf, Regenhardt, Broggi and Beer3]. A similar concept, called joint survival signature, has been presented recently by Coolen-Maturi et al. [Reference Coolen-Maturi, Coolen and Balakrishnan6] for coherent systems with shared components.

The above concepts all focus on binary-state systems; however, signature theory has also been discussed in the literature for multi-state systems [Reference Lisnianski, Frenkel and Ding16,Reference Natvig17], which are more realistic in the field of reliability. For example, for multi-state systems with binary-state components, there are concepts such as multi-dimensional D-spectrum [Reference Gertsbakh, Shpungin, Lisnianski and Frenkel11], bivariate signature [Reference Da, Hu, Li and Li8], multi-state ordered signature [Reference Yi, Balakrishnan and Cui25] and multi-state joint signature [Reference Yi, Balakrishnan and Cui28]. As for multi-state systems with multi-state components, Eryilmaz and Tuncel [Reference Eryilmaz and Tuncel10] introduced the multi-state survival signature as a natural generalization of the survival signature of Coolen and Coolen-Maturi [Reference Coolen, Coolen-Maturi, Zamojski, Mazurkiewicz, Sugier, Walkowiak and Kacprzyk5]. Related discussions on computational methods for the multi-state survival signature can be found in Yi et al. [Reference Yi, Balakrishnan and Cui27,Reference Yi, Balakrishnan and Li29].

There are many theories and methods that are useful in the study of multi-state systems; for example, Markov and semi-Markov models, universal generating function methods, combined methods and fuzzy methods [Reference Lisnianski, Frenkel and Ding16]. However, when it comes to description of their system structures, signature theory has its unique advantages over traditional methods, especially for large complex systems whose structures are too complex to be represented by structural functions. Irrespective of whether one has binary-state systems or multi-state systems, it is known that signatures are vectors or matrices whose dimensions are determined by the number of components (i.e., the system size). This means that signature representations can still be simple even for large complex systems. Moreover, they can be calculated by Monte Carlo simulations no matter how complex the system structures are, and there are also other efficient computational methods available for different types of systems [Reference Yi, Balakrishnan and Cui27,Reference Yi, Balakrishnan and Li29,Reference Yi and Cui31].

Signature and its related concepts play a vital role in stochastic comparisons of systems [Reference Burkschat and Navarro4,Reference Zarezadeh, Asadi and Eftekhar32]. For systems of the same size, stochastic comparisons can be carried out directly based on orderings of their signatures [Reference Kochar, Mukerjee and Samaniego13]. However, for systems of different sizes, some transformation formulas are required to transform the signature of smaller dimension to its counterpart of larger dimension [Reference Navarro, Samaniego, Balakrishnan and Bhattacharya21]. For binary-state systems [Reference Lindqvist, Samaniego and Huseby14,Reference Navarro, Samaniego, Balakrishnan and Bhattacharya21] and multi-state systems with binary-state components [Reference Yi, Balakrishnan and Cui26,Reference Yi, Balakrishnan and Li30], such transformation formulas have already been established, facilitating stochastic comparisons of systems of different sizes. But, in the case of multi-state systems with multi-state components, the problem becomes quite complex, as different component lifetime distributions at different state levels need to be taken care of. To tackle this issue, in this work, we first redefine the concept of multi-state survival signature in Yi et al. [Reference Yi, Balakrishnan and Cui27] for multi-state systems with multi-state components, and then establish transformation formulas for multi-state survival signatures of different sizes based on the assumption of independence of component lifetimes at different state levels.

The rest of this paper is organized as follows. In Section 2, the multi-state system survival signature is first redefined for multi-state coherent or mixed systems with multi-state components, and transformation formulas are then established for multi-state survival signatures of different sizes. Some illustrative examples are presented in Section 3 to demonstrate the transformation formulas established here, and then their usefulness in comparing systems of different sizes is demonstrated in Section 4 with numerical examples. Finally, some concluding remarks are made in Section 5.

2. Comparisons of multi-state systems of different sizes

For multi-state coherent systems with i.i.d. multi-state components, Yi et al. [Reference Yi, Balakrishnan and Cui27] have defined their multi-state survival signature in a matrix form as follows.

Definition 2.1. Let ${T_j}$ $(j = 1, \ldots ,M)$ be the first time that a multi-state coherent system, having n i.i.d. multi-state components and a state space $\Omega = \{ 0, \ldots ,M\}$ for both the system and the components, enters state $j - 1$ or below. Furthermore, for $j = 1, \ldots ,M,$ let $X_j^{(i)}$ $(i = 1, \ldots ,n)$ be i.i.d. random variables with a common absolutely continuous distribution ${F_j}(x),\textrm{ }x \ge 0$, with $X_j^{(i)}$ being the first time that component i enters state $j - 1$ or below. Suppose the system and the components start at perfect functioning state $M,$ degrade into imperfect functioning states $M - 1, \ldots ,1$ successively and finally enter the complete failure state $0$. Then, the multi-state survival signature of the system can be defined as $\boldsymbol{S} = ({\boldsymbol{S}^{(0)}}, \ldots ,{\boldsymbol{S}^{(M)}})$, where ${\boldsymbol{S}^{(j)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(j)},0 \le {i_1}, \ldots ,{i_M} \le n} )$ $(j = 0, \ldots ,M)$ is the multi-state survival signature at system state level $j,$ with

$$S_{{i_1}, \ldots ,{i_M}}^{(j)} = P\left\{ {{T_j} > t\left|{{m_0}(t) = {i_0}, \ldots ,{m_{M - 1}}(t) = {i_{M - 1}},{m_M}(t) = {i_M}: = n - \sum\limits_{w = 0}^{M - 1} {{i_w}} } \right.} \right\}$$

being the conditional probability that the system is in state j or above at time $t$, given ${m_l}(t) = {i_l}$ components in state $l$, for all $l = 0, \ldots ,M$.

Usually, as in [Reference Lindqvist, Samaniego and Huseby14,Reference Navarro, Samaniego, Balakrishnan and Bhattacharya21,Reference Yi, Balakrishnan and Cui26,Reference Yi, Balakrishnan and Li30], comparisons of systems of different sizes can be carried out based on the fact that any binary/multi-state system can be regarded as a mixture of several k-out-of-n type systems. For that purpose, it is preferable for such k-out-of-n type systems to have signature vectors/matrices of a simple form, which leads to the following modified definition of the multi-state survival signature.

Definition 2.2. With notations defined in Definition 2.1, for a multi-state coherent or mixed system with n i.i.d. multi-state components, its multi-state survival signature can be defined as $\boldsymbol{S} = ({\boldsymbol{S}^{(0)}}, \ldots ,{\boldsymbol{S}^{(M)}})$, where ${\boldsymbol{S}^{(j)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(j)},0 \le {i_1}, \ldots ,{i_M} \le n} )$ $(j = 0, \ldots ,M)$ is the multi-state survival signature at system state level $j,$ with

$$S_{{i_1}, \ldots ,{i_M}}^{(j)} = P\{{{T_j} > t|{{m_1}(t) = {i_1}, \ldots ,{m_M}(t) = {i_M}} } \}$$

being the conditional probability that the system is in state j or above at time $t$, given ${m_l}(t) = {i_l}$ components in state l or above, for all $l = 1, \ldots ,M$.

Remark 2.1.

  (1) The new definition differs from Definition 2.1 only in the definition of ${m_l}(t)$, except that it can also be applied to mixed systems. As in the discussions of Yi et al. [Reference Yi, Balakrishnan and Cui27], $S_{{i_1}, \ldots ,{i_M}}^{(j)}$ $(j = 0,1, \ldots ,M)$ are independent of time t and are defined in a way similar to that in Eryilmaz and Tuncel [Reference Eryilmaz and Tuncel10].

  (2) $S_{{i_1}, \ldots ,{i_M}}^{(j)} = S_{\max ({i_1}, \ldots ,{i_M}), \ldots ,\max ({i_{M - 1}},{i_M}),{i_M}}^{(j)}$ $(j = 0, \ldots ,M),$ which leads to two ways of representing ${\boldsymbol{S}^{(j)}}$:

    1. Keep all the elements $S_{{i_1}, \ldots ,{i_M}}^{(j)},0 \le {i_1}, \ldots ,{i_M} \le n$, and relabel subscripts $({i_1}, \ldots ,{i_M})$ as $\sum\nolimits_{j = 1}^M {{i_j}{{(n + 1)}^{j - 1}}} + 1$. For example, when $n = 2$ and $M = 2$, we have

      $${\boldsymbol{S}^{(j)}} = (S_1^{(j)}, \ldots ,S_9^{(j)}) = {(S_{0,0}^{(j)},S_{1,0}^{(j)},S_{2,0}^{(j)},S_{0,1}^{(j)},S_{1,1}^{(j)},S_{2,1}^{(j)},S_{0,2}^{(j)},S_{1,2}^{(j)},S_{2,2}^{(j)})^T},$$
      with $S_{0,1}^{(j)} = S_{1,1}^{(j)},S_{0,2}^{(j)} = S_{1,2}^{(j)} = S_{2,2}^{(j)};$
    2. Delete all $S_{{i_1}, \ldots ,{i_M}}^{(j)}$ that do not satisfy $0 \le {i_M} \le \cdots \le {i_1} \le n,$ and then relabel subscripts $({i_1}, \ldots ,{i_M})$ according to formula (9) in [Reference Cui, Gao and Mo7] as

      $$1 + \sum\nolimits_{j = 1}^{M - 1} {\sum\nolimits_{l = {i_{j + 1}}}^{{i_j} - 1} {\left( {\begin{matrix} {n - l + j - 1}\\ {j - 1} \end{matrix}} \right)} } + \sum\nolimits_{l = 0}^{{i_M} - 1} {\left( {\begin{matrix} {n - l + M - 1}\\ {M - 1} \end{matrix}} \right)} .$$

For example, when $n = 2$ and $M = 2$, we have

$${\boldsymbol{S}^{(j)}} = (S_1^{(j)}, \ldots ,S_6^{(j)}) = {(S_{0,0}^{(j)},S_{1,0}^{(j)},S_{2,0}^{(j)},S_{1,1}^{(j)},S_{2,1}^{(j)},S_{2,2}^{(j)})^T}.$$

In this work, we adopt the latter for the sake of brevity and convenience;

  (3) ${\boldsymbol{S}^{({j_2})}} \le {\boldsymbol{S}^{({j_1})}}$ $(0 \le {j_1} < {j_2} \le M)$, namely, $S_{{i_1}, \ldots ,{i_M}}^{({j_2})} \le S_{{i_1}, \ldots ,{i_M}}^{({j_1})}$ for all $0 \le {i_M} \le \cdots \le {i_1} \le n$. Also, ${\boldsymbol{S}^{(0)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(0)},0 \le {i_M} \le \cdots \le {i_1} \le n} )$, with $S_{{i_1}, \ldots ,{i_M}}^{(0)} = 1$ for all $0 \le {i_M} \le \cdots \le {i_1} \le n$, and such a determined matrix can be denoted by $\boldsymbol{S}_n^{(0)}$ for all systems of size n.
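The reduced relabeling in (2) above is easy to check numerically. The following is a minimal Python sketch of formula (9) of Cui et al. [Reference Cui, Gao and Mo7] (the function name `relabel_index` is ours, introduced only for illustration):

```python
from math import comb

def relabel_index(subscripts, n):
    """Position of (i_1, ..., i_M), with 0 <= i_M <= ... <= i_1 <= n, under
    the reduced labeling given by formula (9) of Cui et al."""
    M = len(subscripts)
    idx = 1
    for j in range(1, M):                      # j = 1, ..., M-1
        for l in range(subscripts[j], subscripts[j - 1]):
            idx += comb(n - l + j - 1, j - 1)
    for l in range(subscripts[-1]):            # l = 0, ..., i_M - 1
        idx += comb(n - l + M - 1, M - 1)
    return idx

# n = 2, M = 2 reproduces the ordering (0,0),(1,0),(2,0),(1,1),(2,1),(2,2)
order = [(0, 0), (1, 0), (2, 0), (1, 1), (2, 1), (2, 2)]
print([relabel_index(s, 2) for s in order])  # [1, 2, 3, 4, 5, 6]
```

The output reproduces the labels $1, \ldots ,6$ of the $n = 2$, $M = 2$ example above.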

Now, for comparing multi-state coherent or mixed systems with multi-state components and of different sizes, we shall assume that the component lifetimes $X_j^{(1)}, \ldots ,X_j^{(n)}$ are independent for different j $(j = 1, \ldots ,M)$. Then, for establishing a relationship between multi-state survival signatures of two equivalent multi-state systems, an extended triangle rule as in Navarro et al. [Reference Navarro, Samaniego, Balakrishnan and Bhattacharya21] needs to be presented first.

Theorem 2.1. Suppose the random variables $X_j^{(1)}, \ldots ,X_j^{(n + 1)}$ $(j = 1, \ldots ,M)$ are i.i.d. with a common absolutely continuous distribution ${F_j}(x),\textrm{ }x \ge 0$, and are independent for different $j$. Then, for $1 \le {k_{1,j}} \le \cdots \le {k_{{r_j},j}} \le n$ $(j = 1, \ldots ,M,{r_j} = 1, \ldots ,M)$, the order statistics vector $(X_j^{({k_{i,j}} :n)},j = 1, \ldots ,M,i = 1, \ldots ,{r_j})$ has the same distribution as

$$(X_j^{({k_{i,j}} + {I_{\{ i > {a_j}\} }}:n + 1)},j = 1, \ldots ,M,i = 1, \ldots ,{r_j})$$

with probability

$${(n + 1)^{ - M}}\prod\limits_{j = 1}^M {\left\{ {{{({k_{1,j}})}^{{I_{\{ {a_j} = 0\} }}}}\left[ {\prod\limits_{l = 1}^{{r_j} - 1} {{{({k_{l + 1,j}} - {k_{l,j}})}^{{I_{\{ {a_j} = l,{k_{l + 1,j}} > {k_{l,j}}\} }}}}} } \right]{{(n + 1 - {k_{{r_j},j}})}^{{I_{\{ {a_j} = {r_j}\} }}}}} \right\}}$$

for all $({a_1}, \ldots ,{a_M}) \in \textbf{A} = \{ ({a_1}, \ldots ,{a_M}):{a_j} \in \{ 0, \ldots ,{r_j}\} ,\textrm{ for all }j = 1, \ldots ,M\}$.

Proof. According to the proof of Theorem 2.1 in Yi et al. [Reference Yi, Balakrishnan and Cui26], we find that for any $j = 1, \ldots ,M$, the order statistics vector $(X_j^{({k_{1,j}}:n)}, \ldots ,X_j^{({k_{{r_j},j}}:n)})$ has the same distribution as $(X_j^{({k_{1,j}} + 1:n + 1)}, \ldots ,X_j^{({k_{{r_j},j}} + 1:n + 1)})$ with probability ${k_{1,j}}/(n + 1)$, as $(X_j^{({k_{1,j}}:n + 1)}, \ldots ,X_j^{({k_{{r_j},j}}:n + 1)})$ with probability $(n + 1 - {k_{{r_j},j}})/(n + 1)$, and as $(X_j^{({k_{1,j}}:n + 1)}, \ldots ,X_j^{({k_{l,j}}:n + 1)},X_j^{({k_{l + 1,j}} + 1:n + 1)}, \ldots ,X_j^{({k_{{r_j},j}} + 1:n + 1)})$ with probability $({k_{l + 1,j}} - {k_{l,j}})/(n + 1)$ for all $l = 1, \ldots ,{r_j} - 1$. This result implies that the order statistics vector $(X_j^{({k_{i,j}}:n)},\textrm{ }i = 1, \ldots ,{r_j})$ has the same distribution as $(X_j^{({k_{i,j}} + {I_{\{ i > {a_j}\} }}:n + 1)},\textrm{ }i = 1, \ldots ,{r_j})$ with probability

$${(n + 1)^{ - 1}}{({k_{1,j}})^{{I_{\{ {a_j} = 0\} }}}}\left[ {\prod\nolimits_{l = 1}^{{r_j} - 1} {{{({k_{l + 1,j}} - {k_{l,j}})}^{{I_{\{ {a_j} = l,{k_{l + 1,j}} > {k_{l,j}}\} }}}}} } \right]{(n + 1 - {k_{{r_j},j}})^{{I_{\{ {a_j} = {r_j}\} }}}}$$

for all ${a_j} = 0, \ldots ,{r_j}$. With the independence of $X_j^{(1)}, \ldots ,X_j^{(n + 1)}$ for different j, the required result follows readily.

Remark 2.2. Specifically, for $M = 2$ and $1 \le {k_{1,j}} \le {k_{2,j}} \le n$ $(j = 1,2)$, the order statistics vector $(X_1^{({k_{1,1}}:n)},X_1^{({k_{2,1}}:n)},X_2^{({k_{1,2}}:n)},X_2^{({k_{2,2}}:n)})$ has the same distribution as

$$(X_1^{({k_{1,1}} + {I_{\{ {a_1} = 0\} }}:n + 1)},X_1^{({k_{2,1}} + {I_{\{ {a_1} = 0,1\} }}:n + 1)},X_2^{({k_{1,2}} + {I_{\{ {a_2} = 0\} }}:n + 1)},X_2^{({k_{2,2}} + {I_{\{ {a_2} = 0,1\} }}:n + 1)})$$

with probability

$${(n + 1)^{ - 2}}\prod\limits_{j = 1}^2 {[{{{({k_{1,j}})}^{{I_{\{ {a_j} = 0\} }}}}{{({k_{2,j}} - {k_{1,j}})}^{{I_{\{ {a_j} = 1,{k_{2,j}} > {k_{1,j}}\} }}}}{{(n + 1 - {k_{2,j}})}^{{I_{\{ {a_j} = 2\} }}}}} ]}$$

for all $({a_1},{a_2})$ such that ${a_1},{a_2} \in \{ 0,1,2\}$.
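In the strict case ${k_{1,j}} < {k_{2,j}}$, the per-level factors in Remark 2.2 each sum to 1 over ${a_j} \in \{ 0,1,2\}$, so the nine probabilities sum to 1. A small numerical check (a sketch with hypothetical thresholds; the function name `remark_prob` is ours):

```python
from itertools import product

def remark_prob(n, k, a):
    """Weight attached to (a_1, a_2) in Remark 2.2 (M = 2); assumes the
    strict case k_{1,j} < k_{2,j} for both j."""
    p = 1.0
    for (k1, k2), aj in zip(k, a):
        if aj == 0:
            factor = k1                # lower index shifts up
        elif aj == 1:
            factor = k2 - k1           # only the upper index shifts up
        else:                          # aj == 2: neither index shifts
            factor = n + 1 - k2
        p *= factor / (n + 1)
    return p

n, k = 4, ((1, 3), (2, 4))             # hypothetical strict thresholds
total = sum(remark_prob(n, k, a) for a in product((0, 1, 2), repeat=2))
print(round(total, 12))  # 1.0
```

The check confirms that the weights form a proper probability distribution over $({a_1},{a_2})$.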

With the use of Theorem 2.1, the relationship between multi-state survival signatures for multi-state coherent or mixed systems with different numbers of multi-state components can be discussed. First, we need to introduce a matrix $\boldsymbol{k} = ({k_{i,j}},i = 1, \ldots ,M,j = 1, \ldots ,M)$ such that $0 \le {k_{i,j}} \le {k_{\tilde{i},\tilde{j}}} \le n$ for any $1 \le i < \tilde{i} \le M,1 \le \tilde{j} < j \le M$, corresponding to a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system, with n i.i.d. multi-state components and a state space $\Omega = \{ 0, \ldots ,M\}$ for both the system and the components, which is in state i $(i = 1, \ldots ,M)$ or above if and only if there are at least ${k_{i,j}}$ $(j = 1, \ldots ,M)$ components in state j or above. Evidently, the lifetime of such a system can be represented through component lifetimes as

$${T_i} = \min (X_1^{(n + 1 - {k_{i,1}}:n)}, \ldots ,X_M^{(n + 1 - {k_{i,M}}:n)}),\,i = 1, \ldots ,M,$$

with $X_1^{(n + 1:n)} = \cdots = X_M^{(n + 1:n)} ={+} \infty$, and its multi-state survival signature can be given as ${\boldsymbol{S}_{\boldsymbol{k}:n}} = (\boldsymbol{S}_n^{(0)},\boldsymbol{S}_{\boldsymbol{k}:n}^{(1)}, \ldots ,\boldsymbol{S}_{\boldsymbol{k}:n}^{(M)})$, where $\boldsymbol{S}_{\boldsymbol{k}:n}^{(i)} = \boldsymbol{S}_{{\boldsymbol{k}_i}:n}^{(i)} = ({S_{{\boldsymbol{k}_i};{i_1}, \ldots ,{i_M}}^{(i)},0 \le {i_M} \le \cdots \le {i_1} \le n} )$ $(i = 1, \ldots ,M)$ with ${\boldsymbol{k}_i} = ({k_{i,1}}, \ldots ,{k_{i,M}})$ and

$$S_{{\boldsymbol{k}_i},{i_1}, \ldots ,{i_M}}^{(i)} = {I_{\{ {i_1} \ge {k_{i,1}}, \ldots ,{i_M} \ge {k_{i,M}}\} }},\;0 \le {i_M} \le \cdots \le {i_1} \le n.$$
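The indicator form of the multi-state $\boldsymbol{k}$-out-of-$n$:$G$ survival signature can be evaluated directly from the counts $({i_1}, \ldots ,{i_M})$; a minimal sketch with hypothetical thresholds:

```python
def k_out_of_n_signature(k_row, counts):
    """S^{(i)}_{k_i; i_1,...,i_M}: indicator that at least k_{i,j} components
    are in state j or above for every j = 1, ..., M."""
    return int(all(i_j >= k_j for i_j, k_j in zip(counts, k_row)))

# hypothetical system of size n = 4 with M = 2 and k_i = (2, 1)
print(k_out_of_n_signature((2, 1), (3, 1)))  # 1: both thresholds met
print(k_out_of_n_signature((2, 1), (3, 0)))  # 0: no component in state 2 or above
```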

For $j = 1, \ldots ,M$, note that $0 \le {k_{1,j}} \le \cdots \le {k_{M,j}} \le n$; denoting by ${r_j}$ the number of zeros in ${k_{1,j}}, \ldots ,{k_{M,j}}$, we have ${k_{1,j}} = \cdots = {k_{{r_j},j}} = 0 < 1 \le {k_{{r_j} + 1,j}} \le \cdots \le {k_{M,j}} \le n$, that is,

$$1 \le n + 1 - {k_{M,j}} \le \cdots \le n + 1 - {k_{{r_j} + 1,j}} \le n < n + 1 = n + 1 - {k_{{r_j},j}} = \cdots = n + 1 - {k_{1,j}}.$$

Then, from Theorem 2.1, an equivalent system of size $n + 1$ for a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system has its lifetime as

$${T_i} = \min (X_1^{(n + 1 - {k_{i,1}} + {I_{\{ i \le {a_1}\} }}:n + 1)}, \ldots ,X_M^{(n + 1 - {k_{i,M}} + {I_{\{ i \le {a_M}\} }}:n + 1)}),\;i = 1, \ldots ,M,$$

with probability

$$\prod\limits_{j = 1}^M {{{\left\{ {{{(n + 1)}^{ - 1}}{{({k_{{r_j} + 1,j}})}^{{I_{\{ {a_j} = {r_j}\} }}}}\left[ {\prod\limits_{l = {r_j} + 1}^{M - 1} {{{({k_{l + 1,j}} - {k_{l,j}})}^{{I_{\{ {a_j} = l,{k_{l + 1,j}} > {k_{l,j}}\} }}}}} } \right]{{(n + 1 - {k_{M,j}})}^{{I_{\{ {a_j} = M\} }}}}} \right\}}^{{I_{\{ {r_j} < M\} }}}}}$$

for all $({a_1}, \ldots ,{a_M}) \in \mathscr{A} = \{ ({a_1}, \ldots ,{a_M}):{a_j} \in \{ {r_j}, \ldots ,M\} \;\textrm{for}\,\textrm{all}\,j = 1, \ldots ,M\}$. Note that for $i = 1, \ldots ,{r_j},\,\textrm{ }j = 1, \ldots ,M$, we have $X_j^{(n + 1 - {k_{i,j}} + {I_{\{ i \le {a_j}\} }}:n + 1)} = X_j^{(n + 2 - {k_{i,j}}:n + 1)} ={+} \infty$ with ${k_{i,j}} = 0$. Moreover, the equivalent system of size $n + 1$ has a multi-state survival signature $\boldsymbol{S}_{\boldsymbol{k}:n}^\ast{=} (\boldsymbol{S}_{n + 1}^{(0)},\boldsymbol{S}_{\boldsymbol{k}:n}^{{\ast} (1)}, \ldots ,\boldsymbol{S}_{\boldsymbol{k}:n}^{{\ast} (M)})$, where $\boldsymbol{S}_{\boldsymbol{k}:n}^{{\ast} (i)} = \boldsymbol{S}_{{\boldsymbol{k}_i}:n}^{{\ast} (i)} = ({S_{{\boldsymbol{k}_i},{i_1}, \ldots ,{i_M}}^{{\ast} (i)},0 \le {i_M} \le \cdots \le {i_1} \le n + 1} )$ $(i = 1, \ldots ,M)$ with

\begin{align*} S_{{\boldsymbol{k}_i},{i_1}, \ldots ,{i_M}}^{{\ast} (i)} & = \sum\limits_{({a_1}, \ldots ,{a_M}) \in \mathscr{A}} {\prod\limits_{j = 1}^M {\left\{ {{{(n + 1)}^{ - 1}}{{({k_{{r_j} + 1,j}})}^{{I_{\{ {a_j} = {r_j}\} }}}}\left[ {\prod\limits_{l = {r_j} + 1}^{M - 1} {{{({k_{l + 1,j}} - {k_{l,j}})}^{{I_{\{ {a_j} = l,{k_{l + 1,j}} > {k_{l,j}}\} }}}}} } \right]} \right.} } \\ & \quad \times { {{{(n + 1 - {k_{M,j}})}^{{I_{\{ {a_j} = M\} }}}}} \}^{{I_{\{ {r_j} < M\} }}}}{I_{\{ {i_1} \ge {k_{i,1}} + {I_{\{ i > {a_1}\} }}, \ldots ,{i_M} \ge {k_{i,M}} + {I_{\{ i > {a_M}\} }}\} }} \\ & = {(n + 1)^{ - M}}\prod\limits_{j = 1}^M {[{{k_{i,j}}{I_{\{ {i_j} \ge {k_{i,j}} + 1\} }} + (n + 1 - {k_{i,j}}){I_{\{ {i_j} \ge {k_{i,j}}\} }}} ]} ,\,0 \le {i_M} \le \cdots \le {i_1} \le n + 1. \end{align*}
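The simplified product form above can be evaluated coordinate-wise, reading each indicator threshold against the corresponding count ${i_j}$ (consistent with the $M = 2$ expansion in Remark 2.3 below). A sketch with some boundary checks (the function name `S_star` is ours):

```python
def S_star(n, k_i, counts):
    """Product form of S*_{k_i; i_1,...,i_M} for the size-(n+1) equivalent of
    a multi-state k-out-of-n:G system; counts = (i_1, ..., i_M)."""
    p = 1.0
    for k, i in zip(k_i, counts):
        # each level contributes k/(n+1) when i >= k+1 plus (n+1-k)/(n+1) when i >= k
        p *= (k * (i >= k + 1) + (n + 1 - k) * (i >= k)) / (n + 1)
    return p

# boundary checks for n = 4, M = 2
print(S_star(4, (0, 0), (3, 2)))  # 1.0: zero thresholds are always met
print(S_star(4, (4, 4), (5, 5)))  # 1.0: all n+1 components at the top level
print(S_star(4, (2, 1), (1, 1)))  # 0.0: too few components at level 1
```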

Now, with the multi-state survival signature of an equivalent system of size $n + 1$ derived above for a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system, we are able to obtain the multi-state survival signature of an equivalent system of size $n + 1$ for any multi-state coherent or mixed system with n i.i.d. components by regarding it as a mixture of several multi-state $\boldsymbol{k}$-out-of-$n$:$G$ type systems, as established in the following theorem.

Theorem 2.2. Let $\boldsymbol{S} = ({\boldsymbol{S}^{(0)}}, \ldots ,{\boldsymbol{S}^{(M)}}),$ where ${\boldsymbol{S}^{(i)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(i)},0 \le {i_M} \le \cdots \le {i_1} \le n} )$ $(i = 0,1, \ldots ,M)$, be the multi-state survival signature of a multi-state coherent or mixed system with n i.i.d. multi-state components and a state space $\Omega = \{ 0, \ldots ,M\}$ for both the system and the components. Suppose the component lifetimes $X_j^{(1)}, \ldots ,X_j^{(n)}$ $(j = 1, \ldots ,M)$ are i.i.d. with a common absolutely continuous distribution ${F_j}(x),\textrm{ }x \ge 0$, and are independent for different $j$. Then, its equivalent system of size $n + 1$ has its multi-state survival signature as

$${\boldsymbol{S}^\ast } = (\boldsymbol{S}_{n + 1}^{(0)},{\boldsymbol{S}^{{\ast} (1)}}, \ldots ,{\boldsymbol{S}^{{\ast} (M)}}) = \sum\limits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:n}^\ast } ,$$

where

\begin{align*} \mathscr{K} & = \{ ({k_{i,j}},i = 1, \ldots ,M,j = 1, \ldots ,M):0 \le {k_{i,j}} \le {k_{\tilde{i},\tilde{j}}} \le n \\ & \quad \textrm{for any }\ 1 \le i < \tilde{i} \le M,\ 1 \le \tilde{j} < j \le M\} , \end{align*}

and ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, can be given as a solution to the set of linear equations

$$\sum\limits_{{\boldsymbol{k}_i} = \tilde{\boldsymbol{k}}} {{s_{\boldsymbol{k}}}} = s_{\tilde{\boldsymbol{k}}}^{(i)},\; \tilde{\boldsymbol{k}} \in {\tilde{\mathscr{K}}} = \{ ({k_1}, \ldots ,{k_M}):0 \le {k_M} \le \cdots \le {k_1} \le n\} ,\;i = 1, \ldots ,M,$$

with ${\boldsymbol{s}^{(i)}} = ({s_{\tilde{\boldsymbol{k}}}^{(i)},\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}} )= {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(i)}}$ and

$$\boldsymbol{M} = ({{M_{{i_1}, \ldots ,{i_M};{j_1}, \ldots ,{j_M}}}, \ 0 \le {i_M} \le \cdots \le {i_1} \le n, \ 0 \le {j_M} \le \cdots \le {j_1} \le n} )$$

being a matrix with all elements ${M_{{i_1}, \ldots ,{i_M};{j_1}, \ldots ,{j_M}}} = {I_{\{ {i_1} \ge {j_1}, \ldots ,{i_M} \ge {j_M}\} }}$, for all $i = 1, \ldots ,M$.

Proof. Any multi-state survival signature $\boldsymbol{S} = ({\boldsymbol{S}^{(0)}}, \ldots ,{\boldsymbol{S}^{(M)}})$ can be regarded as a mixture of multi-state survival signatures ${\boldsymbol{S}_{\boldsymbol{k}:n}} = (\boldsymbol{S}_n^{(0)},\boldsymbol{S}_{{\boldsymbol{k}_1}:n}^{(1)}, \ldots ,\boldsymbol{S}_{{\boldsymbol{k}_M}:n}^{(M)})$ of multi-state $\boldsymbol{k}$-out-of-$n$:$G$ systems, namely $\boldsymbol{S} = \sum\nolimits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}{\boldsymbol{S}_{\boldsymbol{k}:n}}}$, with

\begin{align*} \mathscr{K} & = \{ ({k_{i,j}},i = 1, \ldots ,M,j = 1, \ldots ,M):0 \le {k_{i,j}} \le {k_{\tilde{i},\tilde{j}}} \le n \\ & \quad \;\textrm{for any } \ 1 \le i < \tilde{i} \le M, \ 1 \le \tilde{j} < j \le M\} . \end{align*}

Without loss of generality, ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, can be given by related marginal distributions

$$s_{\tilde{\boldsymbol{k}}}^{(i)},\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}} = \{ ({k_1}, \ldots ,{k_M}):0 \le {k_M} \le \cdots \le {k_1} \le n\} , \ i = 1, \ldots ,M.$$

Consider now ${\boldsymbol{S}^{(i)}} = \sum\nolimits_{\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}} {s_{\tilde{\boldsymbol{k}}}^{(i)}\boldsymbol{S}_{\tilde{\boldsymbol{k}}:n}^{(i)}}$ $(i = 1, \ldots ,M)$ with

$$\boldsymbol{S}_{\tilde{\boldsymbol{k}}:n}^{(i)} = ({S_{\tilde{\boldsymbol{k}};{i_1}, \ldots ,{i_M}}^{(i)},0 \le {i_M} \le \cdots \le {i_1} \le n} ),\textrm{ }\tilde{\boldsymbol{k}} = ({k_1}, \ldots ,{k_M}),$$

and $S_{\tilde{\boldsymbol{k}};{i_1}, \ldots ,{i_M}}^{(i)} = {I_{\{ {i_1} \ge {k_1}, \ldots ,{i_M} \ge {k_M}\} }}$ for all $0 \le {i_M} \le \cdots \le {i_1} \le n$. Then, we have

$$S_{{i_1}, \ldots ,{i_M}}^{(i)} = \sum\limits_{\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}} {s_{\tilde{\boldsymbol{k}}}^{(i)}{I_{\{ {i_1} \ge {k_1}, \ldots ,{i_M} \ge {k_M}\} }}} = \sum\limits_{{i_1} \ge {k_1}, \ldots ,{i_M} \ge {k_M},\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}} {s_{\tilde{\boldsymbol{k}}}^{(i)}} .$$

For $i = 1, \ldots ,M$, let ${\boldsymbol{s}^{(i)}} = ({s_{\tilde{\boldsymbol{k}}}^{(i)},\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}} )= ({s_{{k_1}, \ldots ,{k_M}}^{(i)},0 \le {k_M} \le \cdots \le {k_1} \le n} )$ be a column vector arranged in the same way as ${\boldsymbol{S}^{(i)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(i)}, \ 0 \le {i_M} \le \cdots \le {i_1} \le n} )$, and

$$\boldsymbol{M} = ({{M_{{i_1}, \ldots ,{i_M};{j_1}, \ldots ,{j_M}}}, \ 0 \le {i_M} \le \cdots \le {i_1} \le n, \ 0 \le {j_M} \le \cdots \le {j_1} \le n} )$$

be an invertible matrix with all elements ${M_{{i_1}, \ldots ,{i_M};{j_1}, \ldots ,{j_M}}} = {I_{\{ {i_1} \ge {j_1}, \ldots ,{i_M} \ge {j_M}\} }}$, so that we have ${\boldsymbol{S}^{(i)}} = \boldsymbol{M}{\boldsymbol{s}^{(i)}}$, which yields ${\boldsymbol{s}^{(i)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(i)}}$. Then, the values of ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, can be obtained by solving the set of equations $\sum\nolimits_{{\boldsymbol{k}_i} = \tilde{\boldsymbol{k}}} {{s_{\boldsymbol{k}}} = s_{\tilde{\boldsymbol{k}}}^{(i)}, \tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}, \ i = 1, \ldots ,M}$. Note that even when the values of ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, are not unique, all of them lead to the same ${\boldsymbol{S}^\ast }$ since ${\boldsymbol{S}^\ast }$ depends only on $s_{\tilde{\boldsymbol{k}}}^{(i)},\tilde{\boldsymbol{k}} \in \tilde{\mathscr{K}}, \ i = 1, \ldots ,M$. This completes the proof of the theorem.
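As a small numerical check of ${\boldsymbol{s}^{(i)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(i)}}$, one can feed in the signature of a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system itself, for which ${\boldsymbol{s}^{(i)}}$ must come out as a unit vector at $\tilde{\boldsymbol{k}} = {\boldsymbol{k}_i}$. A sketch for $n = 2$, $M = 2$ (variable names are ours; forward substitution suffices because $\boldsymbol{M}$ is unit lower triangular in the adopted ordering of subscripts):

```python
n = 2
subs = [(i1, i2) for i2 in range(n + 1) for i1 in range(i2, n + 1)]
# adopted ordering: (0,0),(1,0),(2,0),(1,1),(2,1),(2,2)
Mmat = [[int(i1 >= j1 and i2 >= j2) for (j1, j2) in subs] for (i1, i2) in subs]

k_i = (2, 1)                                       # hypothetical threshold row
S_vec = [int(i1 >= k_i[0] and i2 >= k_i[1]) for (i1, i2) in subs]

# forward substitution solves M s = S, since M has a unit diagonal
s = []
for r in range(len(subs)):
    s.append(S_vec[r] - sum(Mmat[r][c] * s[c] for c in range(r)))
print(s)  # [0, 0, 0, 0, 1, 0] -- unit vector at the position of (2, 1)
```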

Remark 2.3. Specifically, for $M = 2$, if the multi-state survival signature of the original system of size $4$ is written as

$$\boldsymbol{S} = ({\boldsymbol{S}^{(0)}},{\boldsymbol{S}^{(1)}},{\boldsymbol{S}^{(2)}}) = {\left( {\begin{matrix} {S_{0,0}^{(0)}}& {S_{1,0}^{(0)}}& {S_{2,0}^{(0)}}& {S_{3,0}^{(0)}}& {S_{4,0}^{(0)}}& {S_{1,1}^{(0)}}& {S_{2,1}^{(0)}}& {S_{3,1}^{(0)}}& {S_{4,1}^{(0)}}& {S_{2,2}^{(0)}}& {S_{3,2}^{(0)}}& {S_{4,2}^{(0)}}& {S_{3,3}^{(0)}}& {S_{4,3}^{(0)}}& {S_{4,4}^{(0)}}\\[6pt] {S_{0,0}^{(1)}}& {S_{1,0}^{(1)}}& {S_{2,0}^{(1)}}& {S_{3,0}^{(1)}}& {S_{4,0}^{(1)}}& {S_{1,1}^{(1)}}& {S_{2,1}^{(1)}}& {S_{3,1}^{(1)}}& {S_{4,1}^{(1)}}& {S_{2,2}^{(1)}}& {S_{3,2}^{(1)}}& {S_{4,2}^{(1)}}& {S_{3,3}^{(1)}}& {S_{4,3}^{(1)}}& {S_{4,4}^{(1)}}\\[6pt] {S_{0,0}^{(2)}}& {S_{1,0}^{(2)}}& {S_{2,0}^{(2)}}& {S_{3,0}^{(2)}}& {S_{4,0}^{(2)}}& {S_{1,1}^{(2)}}& {S_{2,1}^{(2)}}& {S_{3,1}^{(2)}}& {S_{4,1}^{(2)}}& {S_{2,2}^{(2)}}& {S_{3,2}^{(2)}}& {S_{4,2}^{(2)}}& {S_{3,3}^{(2)}}& {S_{4,3}^{(2)}}& {S_{4,4}^{(2)}} \end{matrix}} \right)^T}.$$

Then, the multi-state survival signature ${\boldsymbol{S}^\ast } = (\boldsymbol{S}_5^{(0)},{\boldsymbol{S}^{{\ast} (1)}},{\boldsymbol{S}^{{\ast} (2)}})$ of its equivalent system of size $5$ is given by ${\boldsymbol{S}^\ast } = \sum\nolimits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:n}^\ast }$, where ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, with

$$\mathscr{K} = \{ ({k_{i,j}},i = 1,2,j = 1,2):0 \le {k_{1,2}} \le {k_{1,1}},{k_{2,2}} \le {k_{2,1}} \le 4\} ,$$

are given by the marginal distributions ${\boldsymbol{s}^{(i)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(i)}}$ $(i = 1,2)$ by solving

$$\left\{ \begin{matrix} \sum\limits_{({k_{11}},{k_{12}}) = ({k_1},{k_2})} {{s_{{k_{11}},{k_{12}};{k_{21}},{k_{22}}}}} = s_{{k_1},{k_2}}^{(1)},\textrm{ }({k_1},{k_2}) \in \tilde{\mathscr{K}},\\ \sum\limits_{({k_{21}},{k_{22}}) = ({k_1},{k_2})} {{s_{{k_{11}},{k_{12}};{k_{21}},{k_{22}}}}} = s_{{k_1},{k_2}}^{(2)},\textrm{ }({k_1},{k_2}) \in \tilde{\mathscr{K}}, \end{matrix} \right.$$

with $\tilde{\mathscr{K}} = \{ ({k_1},{k_2}):0 \le {k_2} \le {k_1} \le 4\}$ and

$$\boldsymbol{M} = \left( {\begin{matrix} 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 0& 0& 0& 1& 0& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 0& 0& 1& 1& 0& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 0& 1& 1& 1& 0& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 0& 0& 0& 0& 0& 0\\ 1& 1& 1& 0& 0& 1& 1& 0& 0& 1& 0& 0& 0& 0& 0\\ 1& 1& 1& 1& 0& 1& 1& 1& 0& 1& 1& 0& 0& 0& 0\\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 0& 0& 0\\ 1& 1& 1& 1& 0& 1& 1& 1& 0& 1& 1& 0& 1& 0& 0\\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 0\\ 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1 \end{matrix}} \right).$$
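The matrix $\boldsymbol{M}$ displayed above can be generated directly from its indicator definition; the following sketch also confirms that, in the adopted ordering of subscripts, it is unit lower triangular and hence invertible:

```python
n = 4
subs = [(i1, i2) for i2 in range(n + 1) for i1 in range(i2, n + 1)]
# ordering: (0,0),(1,0),...,(4,0),(1,1),...,(4,1),(2,2),...,(4,2),(3,3),(4,3),(4,4)
M = [[int(i1 >= j1 and i2 >= j2) for (j1, j2) in subs] for (i1, i2) in subs]

print(len(subs))                                               # 15
print(all(M[r][r] == 1 for r in range(len(subs))))             # True
print(all(M[r][c] == 0
          for r in range(len(subs))
          for c in range(r + 1, len(subs))))                   # True
print(M[5])  # row of (1,1): [1, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

The printed row of $(1,1)$ matches the sixth row of the displayed matrix.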

Also, note that $\boldsymbol{S}_{\boldsymbol{k}:4}^\ast{=} (\boldsymbol{S}_5^{(0)},\boldsymbol{S}_{{\boldsymbol{k}_1}:4}^{{\ast} (1)},\boldsymbol{S}_{{\boldsymbol{k}_2}:4}^{{\ast} (2)})$, where $\boldsymbol{S}_5^{(0)} = {(\underbrace{{1, \ldots ,1}}_{{21}})^T}$ and $\boldsymbol{S}_{{\boldsymbol{k}_i}:4}^{{\ast} (i)} = (S_{{\boldsymbol{k}_i};{i_1},{i_2}}^{{\ast} (i)}, 0 \le {i_2} \le {i_1} \le 5)$ $(i = 1,2)$, with

\begin{align*} S_{{\boldsymbol{k}_i};{i_1},{i_2}}^{{\ast} (i)} & = \dfrac{{(5 - {k_{i,1}})(5 - {k_{i,2}})}}{{25}}{I_{\{ {i_1} \ge {k_{i,1}},{i_2} \ge {k_{i,2}}\} }} + \dfrac{{(5 - {k_{i,1}}){k_{i,2}}}}{{25}}{I_{\{ {i_1} \ge {k_{i,1}},{i_2} \ge {k_{i,2}} + 1\} }} \\ & \quad + \dfrac{{{k_{i,1}}(5 - {k_{i,2}})}}{{25}}{I_{\{ {i_1} \ge {k_{i,1}} + 1,{i_2} \ge {k_{i,2}}\} }} + \dfrac{{{k_{i,1}}{k_{i,2}}}}{{25}}{I_{\{ {i_1} \ge {k_{i,1}} + 1,{i_2} \ge {k_{i,2}} + 1\} }},\,0 \le {i_2} \le {i_1} \le 5. \end{align*}
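As a quick numerical check of the display above, the four weights attached to each threshold pair $({k_{i,1}},{k_{i,2}})$ in the size-$4$ to size-$5$ step form a probability distribution. A minimal Python sketch (the helper name `weights_4_to_5` is ours, not from the paper):

```python
from fractions import Fraction

def weights_4_to_5(k1, k2):
    # Mixture weights that carry the threshold pair (k1, k2) from a system
    # of size 4 to its equivalent system of size 5, as in the display above.
    F = Fraction
    return {
        (k1, k2):         F((5 - k1) * (5 - k2), 25),
        (k1, k2 + 1):     F((5 - k1) * k2, 25),
        (k1 + 1, k2):     F(k1 * (5 - k2), 25),
        (k1 + 1, k2 + 1): F(k1 * k2, 25),
    }

w = weights_4_to_5(2, 1)
print(w[(2, 1)], w[(2, 2)], w[(3, 1)], w[(3, 2)])  # 12/25 3/25 8/25 2/25
print(sum(w.values()))                             # 1: a probability mixture
```

For instance, the pair $(2,1)$ produces exactly the weights $12/25$, $3/25$, $8/25$, $2/25$ that reappear in Example 3.1 below.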

In Theorems 2.1 and 2.2, we have considered the equivalence of multi-state survival signatures of multi-state systems of sizes $n$ and $n + 1$. Instead of applying these results repeatedly, we now present a more general result for the equivalence of multi-state survival signatures of multi-state systems of sizes $n$ and $n + l$ $(l = 1,2, \ldots )$.

Theorem 2.3. Suppose the random variables $X_j^{(1)}, \ldots ,X_j^{(n + l)}$ $(j = 1, \ldots ,M)$ are i.i.d. with a common absolutely continuous distribution ${F_j}(x),x \ge 0$, and are independent for different $j$. Then, for $1 \le {k_{1,j}} \le \cdots \le {k_{{r_j},j}} \le n$ $(j = 1, \ldots ,M,{r_j} = 1, \ldots ,M)$, the order statistics vector $(X_j^{({k_{i,j}}:n)},j = 1, \ldots ,M,i = 1, \ldots ,{r_j})$ has the same distribution as

$$(X_j^{({h_{i,j}}:n + l)},j = 1, \ldots ,M,i = 1, \ldots ,{r_j})$$

with probability

$${\left( {\begin{matrix} {n + l}\\ n \end{matrix}} \right)^{ - M}}\prod\limits_{j = 1}^M {\left\{ {\left( {\begin{matrix} {{h_{1,j}} - 1}\\ {{k_{1,j}} - 1} \end{matrix}} \right)\left[ {\prod\limits_{m = 1}^{{r_j} - 1} {{{\left( {\begin{matrix} {{h_{m + 1,j}} - {h_{m,j}} - 1}\\ {{k_{m + 1,j}} - {k_{m,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{m + 1,j}} > {k_{m,j}}\} }}}}} } \right]\left( {\begin{matrix} {n + l - {h_{{r_j},j}}}\\ {n - {k_{{r_j},j}}} \end{matrix}} \right)} \right\}}$$

for all $\boldsymbol{h} \in {\mathscr{H}_{\boldsymbol{k}}}$, with

\begin{align*} {\mathscr{H}_{\boldsymbol{k}}} & = \{ ({h_{i,j}},j = 1, \ldots ,M,i = 1, \ldots ,{r_j}):1 \le {h_{1,j}} \le \cdots \le {h_{{r_j},j}} \le n + l,{k_{1,j}} \le {h_{1,j}},\\ & \quad {k_{2,j}} - {k_{1,j}} \le {h_{2,j}} - {h_{1,j}}, \ldots ,{k_{{r_j},j}} - {k_{{r_j} - 1,j}} \le {h_{{r_j},j}} - {h_{{r_j} - 1,j}}\textrm{,}{h_{{r_j},j}} \le {k_{{r_j},j}} + l,\\ & \quad {I_{\{ {k_{2,j}} > {k_{1,j}}\} }}\textrm{ = }{I_{\{ {h_{2,j}} > {h_{1,j}}\} }}\textrm{,} \ldots \textrm{,}{I_{\{ {k_{{r_j},j}} > {k_{{r_j} - 1,j}}\} }} = {I_{\{ {h_{{r_j},j}} > {h_{{r_j} - 1,j}}\}}}\textrm{ for all }j\} . \end{align*}

Proof. According to the proof of Theorem 2.4 in Yi et al. [Reference Yi, Balakrishnan and Li30], we find that for any $j = 1, \ldots ,M$, the order statistics vector $(X_j^{({k_{1,j}}:n)}, \ldots ,X_j^{({k_{{r_j},j}}:n)})$ has the same distribution as $(X_j^{({h_{1,j}}:n + l)}, \ldots ,X_j^{({h_{{r_j},j}}:n + l)})$ with probability

$${\left( {\begin{matrix} {n + l}\\ n \end{matrix}} \right)^{ - 1}}\left( {\begin{matrix} {{h_{1,j}} - 1}\\ {{k_{1,j}} - 1} \end{matrix}} \right)\left[ {\prod\limits_{m = 1}^{{r_j} - 1} {{{\left( {\begin{matrix} {{h_{m + 1,j}} - {h_{m,j}} - 1}\\ {{k_{m + 1,j}} - {k_{m,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{m + 1,j}} > {k_{m,j}}\} }}}}} } \right]\left( {\begin{matrix} {n + l - {h_{{r_j},j}}}\\ {n - {k_{{r_j},j}}} \end{matrix}} \right)$$

for all $({h_{1,j}}, \ldots ,{h_{{r_j},j}})$ such that $1 \le {h_{1,j}} \le \cdots \le {h_{{r_j},j}} \le n + l$ and

\[\begin{matrix} {k_{1,j}} \le {h_{1,j}},{k_{2,j}} - {k_{1,j}} \le {h_{2,j}} - {h_{1,j}}, \ldots ,{k_{{r_j},j}} - {k_{{r_j} - 1,j}} \le {h_{{r_j},j}} - {h_{{r_j} - 1,j}}\textrm{,}{h_{{r_j},j}} \le {k_{{r_j},j}} + l,\\ {I_{\{ {k_{2,j}} > {k_{1,j}}\} }}\textrm{ = }{I_{\{ {h_{2,j}} > {h_{1,j}}\} }}\textrm{,} \ldots \textrm{,}{I_{\{ {k_{{r_j},j}} > {k_{{r_j} - 1,j}}\} }} = {I_{\{ {h_{{r_j},j}} > {h_{{r_j} - 1,j}}\}}}. \end{matrix}\]

With the independence of $X_j^{(1)}, \ldots ,X_j^{(n + l)}$ for different j, the required result readily follows.
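For a single $j$, the weights in Theorem 2.3 form a probability distribution over the admissible vectors $\boldsymbol{h}$. The following Python sketch (function name ours; a one-dimensional illustration of the generalized triangle rule, not the full multi-state machinery) enumerates these weights and confirms that they sum to one for $n = 4$, $l = 2$ and thresholds $(k_1, k_2) = (2, 3)$:

```python
from fractions import Fraction
from math import comb
from itertools import combinations_with_replacement

def triangle_rule_weights(k, n, l):
    # Single-j factor of Theorem 2.3: weights with which the order statistics
    # (X^{(k_1:n)}, ..., X^{(k_r:n)}) are matched in distribution by order
    # statistics from a sample of size n + l.
    r = len(k)
    weights = {}
    for h in combinations_with_replacement(range(1, n + l + 1), r):
        if h[0] < k[0] or h[-1] > k[-1] + l:
            continue
        if not all(k[i + 1] - k[i] <= h[i + 1] - h[i]
                   and (k[i + 1] > k[i]) == (h[i + 1] > h[i])
                   for i in range(r - 1)):
            continue
        p = Fraction(comb(h[0] - 1, k[0] - 1), comb(n + l, n))
        for i in range(r - 1):
            if k[i + 1] > k[i]:
                p *= comb(h[i + 1] - h[i] - 1, k[i + 1] - k[i] - 1)
        p *= comb(n + l - h[-1], n - k[-1])
        if p:
            weights[h] = p
    return weights

w = triangle_rule_weights((2, 3), n=4, l=2)
print(sum(w.values()))   # 1: the weights form a probability distribution
```

The same check passes for any admissible threshold vector, since the weights are exactly the probabilities appearing in the theorem.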

From Theorem 2.3, the relationship between multi-state survival signatures of multi-state systems of sizes $n$ and $n + l$ can be established as follows.

Theorem 2.4. Let $\boldsymbol{S} = ({\boldsymbol{S}^{(0)}}, \ldots ,{\boldsymbol{S}^{(M)}})$, where ${\boldsymbol{S}^{(i)}} = ({S_{{i_1}, \ldots ,{i_M}}^{(i)},0 \le {i_M} \le \cdots \le {i_1} \le n} )$ $(i = 0,1, \ldots ,M)$, be the multi-state survival signature of a multi-state coherent or mixed system with n i.i.d. multi-state components and a state space $\Omega = \{ 0, \ldots ,M\}$ for both the system and the components. Suppose the component lifetimes $X_j^{(1)}, \ldots ,X_j^{(n)}$ $(j = 1, \ldots ,M)$ are i.i.d. with a common absolutely continuous distribution ${F_j}(x),\textrm{ }x \ge 0$, and are independent for different $j$. Then, its equivalent system of size $n + l$ has its multi-state survival signature as

$${\boldsymbol{S}^{[l]\ast }} = ({\boldsymbol{S}^{[l]\ast (0)}}, \ldots ,{\boldsymbol{S}^{[l]\ast (M)}}) = \sum\limits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:n}^{[l]\ast }} ,$$

where ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, are as in Theorem 2.2 and

$$\boldsymbol{S}_{\boldsymbol{k}:n}^{[l]\ast } = (\boldsymbol{S}_{n + l}^{(0)},\boldsymbol{S}_{\boldsymbol{k}:n}^{[l]\ast (1)}, \ldots ,\boldsymbol{S}_{\boldsymbol{k}:n}^{[l]\ast (M)})$$

is the multi-state survival signature of the equivalent system of size $n + l$ of a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system given by $\boldsymbol{S}_{\boldsymbol{k}:n}^{[l]\ast (i)} = ({S_{\boldsymbol{k};{i_1}, \ldots ,{i_M}}^{[l]\ast (i)},0 \le {i_M} \le \cdots \le {i_1} \le n + l} )$ $(i = 1, \ldots ,M)$, with ${r_j}$ $(j = 1, \ldots ,M)$ being the number of zeros in ${k_{1,j}}, \ldots ,{k_{M,j}}$ and

\begin{align*} S_{\boldsymbol{k};{i_1}, \ldots ,{i_M}}^{[l]\ast (i)} & = \sum\limits_{\boldsymbol{h} \in {\mathscr{H}_{\boldsymbol{k}}}}\!{\prod\limits_{j = 1}^M {{{\left\{ {{{\left( {\begin{matrix} {n + l}\\ n \end{matrix}} \right)}^{ - 1}}\!\left( {\begin{matrix} {{h_{{r_j} + 1,j}} - 1}\\ {{k_{{r_j} + 1,j}} - 1} \end{matrix}} \right)\!\left[ {\prod\limits_{m = {r_j} + 1}^{M - 1} {{{\left( {\begin{matrix} {{h_{m + 1,j}} - {h_{m,j}} - 1}\\ {{k_{m + 1,j}} - {k_{m,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{m + 1,j}} > {k_{m,j}}\} }}}}} } \right]\!\left( {\begin{matrix} {n + l - {h_{M,j}}}\\ {n - {k_{M,j}}} \end{matrix}} \right)} \right\}}^{{I_{\{ {r_j} < M\} }}}}} } \\ & \quad \times {I_{\{ {i_1} \ge {h_{i,1}}, \ldots ,{i_M} \ge {h_{i,M}}\} }},\,0 \le {i_M} \le \cdots \le {i_1} \le n + l, \end{align*}

and

\begin{align*} {\mathscr{H}_{\boldsymbol{k}}} & = \{ ({h_{i,j}},j = 1, \ldots ,M,i = {r_j} + 1, \ldots ,M):1 \le {h_{{r_j} + 1,j}} \le \cdots \le {h_{M,j}} \le n + l, \\ & \quad {k_{{r_j} + 1,j}} \le {h_{{r_j} + 1,j}},{k_{{r_j} + 2,j}} - {k_{{r_j} + 1,j}} \le {h_{{r_j} + 2,j}} - {h_{{r_j} + 1,j}}, \ldots ,{k_{M,j}} - {k_{M - 1,j}} \le {h_{M,j}} - {h_{M - 1,j}}\textrm{,} \\ & \quad {h_{M,j}} \le {k_{M,j}} + l,{I_{\{ {k_{{r_j} + 2,j}} > {k_{{r_j} + 1,j}}\} }}\textrm{ = }{I_{\{ {h_{{r_j} + 2,j}} > {h_{{r_j} + 1,j}}\} }}\textrm{,} \ldots \textrm{,}{I_{\{ {k_{M,j}} > {k_{M - 1,j}}\} }} = {I_{\{ {h_{M,j}} > {h_{M - 1,j}}\}}}\textrm{ for all }j\} . \end{align*}

Proof. Note that, for $0 \le {k_{1,j}} \le \cdots \le {k_{M,j}} \le n$ $(j = 1, \ldots ,M)$, as before, if the lifetimes of a multi-state $\boldsymbol{k}$-out-of-$n$:$G$ system are denoted by

$${T_i} = \min (X_1^{(n + 1 - {k_{i,1}}:n)}, \ldots ,X_M^{(n + 1 - {k_{i,M}}:n)}),\textrm{ }i = 1, \ldots ,M,$$

then the lifetimes of its equivalent system of size $n + l$ can be denoted as

$${T_i} = \min (X_1^{(n + l + 1 - {h_{i, 1}}:n + l)}, \ldots ,X_M^{(n + l + 1 - {h_{i,M}}:n + l)}),\textrm{ }i = 1, \ldots ,M,$$

with probability

$$\prod\limits_{j = 1}^M {{{\left\{ {{{\left( {\begin{matrix} {n + l}\\ n \end{matrix}} \right)}^{ - 1}}\left( {\begin{matrix} {{h_{{r_j} + 1,j}} - 1}\\ {{k_{{r_j} + 1,j}} - 1} \end{matrix}} \right)\left[ {\prod\limits_{m = {r_j} + 1}^{M - 1} {{{\left( {\begin{matrix} {{h_{m + 1,j}} - {h_{m,j}} - 1}\\ {{k_{m + 1,j}} - {k_{m,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{m + 1,j}} > {k_{m,j}}\} }}}}} } \right]\left( {\begin{matrix} {n + l - {h_{M,j}}}\\ {n - {k_{M,j}}} \end{matrix}} \right)} \right\}}^{{I_{\{ {r_j} < M\} }}}}}$$

for all $\boldsymbol{h} \in {\mathscr{H}_{\boldsymbol{k}}}$. The rest of the proof proceeds similarly to that of Theorem 2.2, and is therefore omitted here for brevity.

Remark 2.4. Specifically, for $M = 2,\textrm{ }n = 4$ and $l = 2$, we have $\boldsymbol{S}_{\boldsymbol{k}:4}^{[2]\ast (i)} = ({S_{\boldsymbol{k};{i_1},{i_2}}^{[2]\ast (i)}, \ 0 \le {i_2}} \le {i_1} { \le 6} )$ $(i = 1,2)$, where

$$S_{\boldsymbol{k};{i_1},{i_2}}^{[2]\ast (i)} = \sum\limits_{\boldsymbol{h} \in {\mathscr{H}_{\boldsymbol{k}}}} {\prod\limits_{j = 1}^2 {{{\left\{ {\frac{1}{{15}}{{\left( {\begin{matrix} {{h_{1,j}} - 1}\\ {{k_{1,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{1,j}} > 0\} }}}}{{\left( {\begin{matrix} {{h_{2,j}} - {h_{1,j}} - 1}\\ {{k_{2,j}} - {k_{1,j}} - 1} \end{matrix}} \right)}^{{I_{\{ {k_{2,j}} > {k_{1,j}}\} }}}}\left( {\begin{matrix} {6 - {h_{2,j}}}\\ {4 - {k_{2,j}}} \end{matrix}} \right)} \right\}}^{{I_{\{ {k_{2,j}} > 0\} }}}}{I_{\{ {i_1} \ge {h_{i,1}},{i_2} \ge {h_{i,2}}\} }}} } ,$$

with $0 \le {i_2} \le {i_1} \le 6$ and

\begin{align*} {\mathscr{H}_{\boldsymbol{k}}} & = \{ ({h_{1,1}},{h_{1,2}};{h_{2,1}},{h_{2,2}}):0 \le {h_{1,j}} \le {h_{2,j}} \le 6,{k_{1,j}} \le {h_{1,j}},{k_{2,j}} - {k_{1,j}} \le {h_{2,j}} - {h_{1,j}},\\ & \quad {h_{2,j}} \le {k_{2,j}} + 2,{I_{\{ {k_{2,j}} > {k_{1,j}}\} }} = {I_{\{ {h_{2,j}} > {h_{1,j}}\} }}\,\textrm{for}\,\textrm{all}\,j = 1,2\} . \end{align*}

3. Illustrative examples

To illustrate the results established in the preceding section, let us consider a wireless sensor system with four i.i.d. sensors and a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ structure, which works (perfectly or imperfectly) if and only if there are at least two consecutive sensors working (perfectly or imperfectly), works perfectly if and only if there is also at least one sensor working perfectly, and fails if it does not work. According to Yi et al. [Reference Yi, Balakrishnan and Cui27], the multi-state survival signature of such a system is given by

\begin{align*} \boldsymbol{S} & = ({\boldsymbol{S}^{(0)}},{\boldsymbol{S}^{(1)}},{\boldsymbol{S}^{(2)}})\\ & = {\left( {\begin{matrix} {S_{0,0}^{(0)}}& {S_{1,0}^{(0)}}& {S_{2,0}^{(0)}}& {S_{3,0}^{(0)}}& {S_{4,0}^{(0)}}& {S_{1,1}^{(0)}}& {S_{2,1}^{(0)}}& {S_{3,1}^{(0)}}& {S_{4,1}^{(0)}}& {S_{2,2}^{(0)}}& {S_{3,2}^{(0)}}& {S_{4,2}^{(0)}}& {S_{3,3}^{(0)}}& {S_{4,3}^{(0)}}& {S_{4,4}^{(0)}}\\[6pt] {S_{0,0}^{(1)}}& {S_{1,0}^{(1)}}& {S_{2,0}^{(1)}}& {S_{3,0}^{(1)}}& {S_{4,0}^{(1)}}& {S_{1,1}^{(1)}}& {S_{2,1}^{(1)}}& {S_{3,1}^{(1)}}& {S_{4,1}^{(1)}}& {S_{2,2}^{(1)}}& {S_{3,2}^{(1)}}& {S_{4,2}^{(1)}}& {S_{3,3}^{(1)}}& {S_{4,3}^{(1)}}& {S_{4,4}^{(1)}}\\[6pt] {S_{0,0}^{(2)}}& {S_{1,0}^{(2)}}& {S_{2,0}^{(2)}}& {S_{3,0}^{(2)}}& {S_{4,0}^{(2)}}& {S_{1,1}^{(2)}}& {S_{2,1}^{(2)}}& {S_{3,1}^{(2)}}& {S_{4,1}^{(2)}}& {S_{2,2}^{(2)}}& {S_{3,2}^{(2)}}& {S_{4,2}^{(2)}}& {S_{3,3}^{(2)}}& {S_{4,3}^{(2)}}& {S_{4,4}^{(2)}} \end{matrix}} \right)^T}\\ & = {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\ 0& 0& {1/2}& 1& 1& 0& {1/2}& 1& 1& {1/2}& 1& 1& 1& 1& 1\\ 0& 0& 0& 0& 0& 0& {1/2}& 1& 1& {1/2}& 1& 1& 1& 1& 1 \end{matrix}} \right)^T}. \end{align*}

Suppose the sensor lifetimes are independent for different state levels. Then, the multi-state survival signatures of its equivalent systems of sizes $5$ and $6$ are presented in Examples 3.1 and 3.2, respectively, by the use of Theorem 2.2, and the latter is also worked out in Example 3.3 by the use of Theorem 2.4.

Example 3.1. For such a multi-state linear consecutive $(2,1)$-out-of-$4:G$ wireless sensor system, according to Remark 2.3, the multi-state survival signature of its equivalent system of size $5$ can be given as ${\boldsymbol{S}^\ast } = \sum\nolimits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:4}^\ast }$, where ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, with

$$\mathscr{K} = \{ ({k_{i,j}},i = 1,2,j = 1,2):0 \le {k_{1,2}} \le {k_{1,1}},{k_{2,2}} \le {k_{2,1}} \le 4\} ,$$

are given by the following marginal distributions:

$${\boldsymbol{s}^{(1)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(1)}} = {\left( {0,0,\frac{1}{2},\frac{1}{2},0,0,0,0,0,0,0,0,0,0,0} \right)^T},$$
$${\boldsymbol{s}^{(2)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(2)}} = {\left( {0,0,0,0,0,0,\frac{1}{2},\frac{1}{2},0,0,0,0,0,0,0} \right)^T}.$$

These imply $s_{2,0}^{(1)} = s_{3,0}^{(1)} = 1/2$, $s_{{k_1},{k_2}}^{(1)} = 0$ for other $0 \le {k_2} \le {k_1} \le 4$, and $s_{2,1}^{(2)} = s_{3,1}^{(2)} = 1/2$, $s_{{k_1},{k_2}}^{(2)} = 0$ for other $0 \le {k_2} \le {k_1} \le 4$, which in turn implies that ${s_{\boldsymbol{k}}} = 0$ for all $\boldsymbol{k} \in \mathscr{K}$ except possibly ${s_{2,0;2,1}},{s_{2,0;3,1}},{s_{3,0;3,1}}$. Now, upon solving the set of equations

$$\left\{ \begin{array}{l} {s_{2,0;2,1}} + {s_{2,0;3,1}} = s_{2,0}^{(1)} = 1/2,\\[3pt] {s_{3,0;3,1}} = s_{3,0}^{(1)} = 1/2,\\[3pt] {s_{2,0;2,1}} = s_{2,1}^{(2)} = 1/2,\\[3pt] {s_{2,0;3,1}} + {s_{3,0;3,1}} = s_{3,1}^{(2)} = 1/2, \end{array} \right.$$
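These four equations decouple and can be solved by simple back-substitution; a small Python check (variable names ours) reproduces the solution stated next:

```python
from fractions import Fraction
F = Fraction

# marginal coefficients read off above
s1_20, s1_30 = F(1, 2), F(1, 2)   # s^{(1)}_{2,0}, s^{(1)}_{3,0}
s2_21, s2_31 = F(1, 2), F(1, 2)   # s^{(2)}_{2,1}, s^{(2)}_{3,1}

s_2021 = s2_21            # third equation
s_3031 = s1_30            # second equation
s_2031 = s1_20 - s_2021   # first equation
# the fourth equation must then hold automatically
assert s_2031 + s_3031 == s2_31
print(s_2021, s_2031, s_3031)   # 1/2 0 1/2
```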

we get ${s_{2,0;2,1}} = {s_{3,0;3,1}} = 1/2$ and ${s_{2,0;3,1}} = 0$. We then have

$${\boldsymbol{S}^\ast } = \frac{1}{2}\boldsymbol{S}_{2,0;2,1:4}^\ast{+} \frac{1}{2}\boldsymbol{S}_{3,0;3,1:4}^\ast ,$$

where $\boldsymbol{S}_{2,0;2,1:4}^\ast{=} {(\boldsymbol{S}_5^{(0)},\boldsymbol{S}_{2,0:4}^{{\ast} (1)},\boldsymbol{S}_{2,1:4}^{{\ast} (2)})^T}$ and $\boldsymbol{S}_{3,0;3,1:4}^\ast{=} {(\boldsymbol{S}_5^{(0)},\boldsymbol{S}_{3,0:4}^{{\ast} (1)},\boldsymbol{S}_{3,1:4}^{{\ast} (2)})^T}$, with

\begin{align*} \boldsymbol{S}_{2,0:4}^{{\ast} (1)} & = \dfrac{{3 \times 5}}{{25}}\boldsymbol{S}_{2,0:5}^{(1)} + \dfrac{{3 \times 0}}{{25}}\boldsymbol{S}_{2,1:5}^{(1)} + \dfrac{{2 \times 5}}{{25}}\boldsymbol{S}_{3,0:5}^{(1)} + \dfrac{{2 \times 0}}{{25}}\boldsymbol{S}_{3,1:5}^{(1)},\\ \boldsymbol{S}_{2,1:4}^{{\ast} (2)} & = \dfrac{{3 \times 4}}{{25}}\boldsymbol{S}_{2,1:5}^{(2)} + \dfrac{{3 \times 1}}{{25}}\boldsymbol{S}_{2,2:5}^{(2)} + \dfrac{{2 \times 4}}{{25}}\boldsymbol{S}_{3,1:5}^{(2)} + \dfrac{{2 \times 1}}{{25}}\boldsymbol{S}_{3,2:5}^{(2)}, \end{align*}
\begin{align*} \boldsymbol{S}_{3,0:4}^{\ast (1)} & = \frac{{2 \times 5}}{{25}}\boldsymbol{S}_{3,0:5}^{(1)} + \frac{{2 \times 0}}{{25}}\boldsymbol{S}_{3,1:5}^{(1)} + \frac{{3 \times 5}}{{25}}\boldsymbol{S}_{4,0:5}^{(1)} + \frac{{3 \times 0}}{{25}}\boldsymbol{S}_{4,1:5}^{(1)},\\ \boldsymbol{S}_{3,1:4}^{\ast (2)} & = \frac{{2 \times 4}}{{25}}\boldsymbol{S}_{3,1:5}^{(2)} + \frac{{2 \times 1}}{{25}}\boldsymbol{S}_{3,2:5}^{(2)} + \frac{{3 \times 4}}{{25}}\boldsymbol{S}_{4,1:5}^{(2)} + \frac{{3 \times 1}}{{25}}\boldsymbol{S}_{4,2:5}^{(2)}. \end{align*}

Then, we clearly have

\begin{align*} {\boldsymbol{S}^{{\ast} (1)}} & = \dfrac{1}{2}\boldsymbol{S}_{2,0:4}^{{\ast} (1)} + \dfrac{1}{2}\boldsymbol{S}_{3,0:4}^{{\ast} (1)} = \dfrac{1}{{50}}({15\boldsymbol{S}_{2,0:5}^{(1)} + 20\boldsymbol{S}_{3,0:5}^{(1)} + 15\boldsymbol{S}_{4,0:5}^{(1)}} ),\\ {\boldsymbol{S}^{{\ast} (2)}} & = \dfrac{1}{2}\boldsymbol{S}_{2,1:4}^{{\ast} (2)} + \dfrac{1}{2}\boldsymbol{S}_{3,1:4}^{{\ast} (2)} = \dfrac{1}{{50}}({12\boldsymbol{S}_{2,1:5}^{(2)} + 3\boldsymbol{S}_{2,2:5}^{(2)} + 16\boldsymbol{S}_{3,1:5}^{(2)} + 4\boldsymbol{S}_{3,2:5}^{(2)} + 12\boldsymbol{S}_{4,1:5}^{(2)} + 3\boldsymbol{S}_{4,2:5}^{(2)}} ); \end{align*}

in other words, the equivalent system of size $5$ of a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system has its multi-state survival signature as

\begin{align*} {\boldsymbol{S}^\ast } & = (\boldsymbol{S}_5^{(0)},{\boldsymbol{S}^{{\ast} (1)}},{\boldsymbol{S}^{{\ast} (2)}}) \\ & = \left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\ {S_{0,0}^{{\ast} (1)}}& {S_{1,0}^{{\ast} (1)}}& {S_{2,0}^{{\ast} (1)}}& {S_{3,0}^{{\ast} (1)}}& {S_{4,0}^{{\ast} (1)}}& {S_{5,0}^{{\ast} (1)}}& {S_{1,1}^{{\ast} (1)}}& {S_{2,1}^{{\ast} (1)}}& {S_{3,1}^{{\ast} (1)}}& {S_{4,1}^{{\ast} (1)}}& {S_{5,1}^{{\ast} (1)}}& {S_{2,2}^{{\ast} (1)}}& {S_{3,2}^{{\ast} (1)}}& {S_{4,2}^{{\ast} (1)}}& {S_{5,2}^{{\ast} (1)}}\\[6pt] {S_{0,0}^{{\ast} (2)}}& {S_{1,0}^{{\ast} (2)}}& {S_{2,0}^{{\ast} (2)}}& {S_{3,0}^{{\ast} (2)}}& {S_{4,0}^{{\ast} (2)}}& {S_{5,0}^{{\ast} (2)}}& {S_{1,1}^{{\ast} (2)}}& {S_{2,1}^{{\ast} (2)}}& {S_{3,1}^{{\ast} (2)}}& {S_{4,1}^{{\ast} (2)}}& {S_{5,1}^{{\ast} (2)}}& {S_{2,2}^{{\ast} (2)}}& {S_{3,2}^{{\ast} (2)}}& {S_{4,2}^{{\ast} (2)}}& {S_{5,2}^{{\ast} (2)}} \end{matrix}}\right.\notag\\ &\qquad \left.{\begin{matrix} 1& 1& 1& 1& 1& 1\\ {S_{3,3}^{{\ast} (1)}}& {S_{4,3}^{{\ast} (1)}}& {S_{5,3}^{{\ast} (1)}}& {S_{4,4}^{{\ast} (1)}}& {S_{5,4}^{{\ast} (1)}}& {S_{5,5}^{{\ast} (1)}}\\[6pt] {S_{3,3}^{{\ast} (2)}}& {S_{4,3}^{{\ast} (2)}}& {S_{5,3}^{{\ast} (2)}}& {S_{4,4}^{{\ast} (2)}}& {S_{5,4}^{{\ast} (2)}}& {S_{5,5}^{{\ast} (2)}} \end{matrix}} \right)^T \\ & = \frac{1}{{50}}{\left( {\begin{matrix} {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& {15}& {35}& {50}& {50}& 0& {15}& {35}& {50}& {50}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& 0& 0& 0& 0& 0& {12}& {28}& {40}& {40}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50} \end{matrix}} \right)^T}. \end{align*}
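The state-2 row of the matrix just displayed can be recomputed directly from the mixture representation of ${\boldsymbol{S}^{{\ast}(2)}}$, since each multi-state $\boldsymbol{k}$-out-of-$5$:$G$ signature entry is simply an indicator. A Python sketch (names ours):

```python
from fractions import Fraction
F = Fraction

# coefficients of S^{(2)}_{k1,k2:5} in the mixture for S^{*(2)} above
coef = {(2, 1): F(12, 50), (2, 2): F(3, 50), (3, 1): F(16, 50),
        (3, 2): F(4, 50), (4, 1): F(12, 50), (4, 2): F(3, 50)}

def S2(i1, i2):
    # S^{(2)}_{k1,k2:5}(i1, i2) is the indicator I{i1 >= k1, i2 >= k2}
    return sum(c for (k1, k2), c in coef.items() if i1 >= k1 and i2 >= k2)

print(S2(2, 1), S2(3, 2), S2(4, 1))   # 6/25 7/10 4/5
```

These values agree with the entries $12/50$, $35/50$ and $40/50$ of the state-2 row above.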

Example 3.2. For the equivalent system in Example 3.1 with its multi-state survival signature as given above, namely,

\begin{align*} \boldsymbol{S} & = ({\boldsymbol{S}^{(0)}},{\boldsymbol{S}^{(1)}},{\boldsymbol{S}^{(2)}})\\ & = \dfrac{1}{{50}}{\left( {\begin{matrix} {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& {15}& {35}& {50}& {50}& 0& {15}& {35}& {50}& {50}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& 0& 0& 0& 0& 0& {12}& {28}& {40}& {40}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50} \end{matrix}} \right)^T}, \end{align*}

according to Remark 2.3, the multi-state survival signature of its equivalent system of size $6$ can be given as ${\boldsymbol{S}^\ast } = \sum\nolimits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:5}^\ast }$, where ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, with

$$\mathscr{K} = \{ ({k_{i,j}},i = 1,2,j = 1,2):0 \le {k_{1,2}} \le {k_{1,1}},{k_{2,2}} \le {k_{2,1}} \le 5\} ,$$

are given by the marginal distributions as follows:

\[\begin{matrix} {\boldsymbol{s}^{(1)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(1)}} = {\left( {0,0,\dfrac{3}{{10}},\dfrac{2}{5},\dfrac{3}{{10}},0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0} \right)^T},\\ {\boldsymbol{s}^{(2)}} = {\boldsymbol{M}^{ - 1}}{\boldsymbol{S}^{(2)}} = {\left( {0,0,0,0,0,0,0,\dfrac{6}{{25}},\dfrac{8}{{25}},\dfrac{6}{{25}},0,\dfrac{3}{{50}},\dfrac{2}{{25}},\dfrac{3}{{50}},0,0,0,0,0,0,0} \right)^T}. \end{matrix}\]

These imply $s_{2,0}^{(1)} = 3/10,s_{3,0}^{(1)} = 2/5,s_{4,0}^{(1)} = 3/10$, $s_{{k_1},{k_2}}^{(1)} = 0$ for other $0 \le {k_2} \le {k_1} \le 5$, and

$$s_{2,1}^{(2)} = \frac{6}{{25}},s_{3,1}^{(2)} = \frac{8}{{25}},s_{4,1}^{(2)} = \frac{6}{{25}},s_{2,2}^{(2)} = \frac{3}{{50}},s_{3,2}^{(2)} = \frac{2}{{25}},s_{4,2}^{(2)} = \frac{3}{{50}},$$

$s_{{k_1},{k_2}}^{(2)} = 0$ for other $0 \le {k_2} \le {k_1} \le 5$. Now, upon solving the set of equations

$$\left\{ \begin{array}{l} {s_{2,0;2,1}} + {s_{2,0;3,1}} + {s_{2,0;4,1}} + {s_{2,0;2,2}} + {s_{2,0;3,2}} + {s_{2,0;4,2}} = s_{2,0}^{(1)} = 3/10,\\ {s_{3,0;3,1}} + {s_{3,0;4,1}} + {s_{3,0;3,2}} + {s_{3,0;4,2}} = s_{3,0}^{(1)} = 2/5,\\ {s_{4,0;4,1}} + {s_{4,0;4,2}} = s_{4,0}^{(1)} = 3/10,\\ {s_{2,0;2,1}} = s_{2,1}^{(2)} = 6/25,\\ {s_{2,0;3,1}} + {s_{3,0;3,1}} = s_{3,1}^{(2)} = 8/25,\\ {s_{2,0;4,1}} + {s_{3,0;4,1}} + {s_{4,0;4,1}} = s_{4,1}^{(2)} = 6/25,\\ {s_{2,0;2,2}} = s_{2,2}^{(2)} = 3/50,\\ {s_{2,0;3,2}} + {s_{3,0;3,2}} = s_{3,2}^{(2)} = 2/25,\\ {s_{2,0;4,2}} + {s_{3,0;4,2}} + {s_{4,0;4,2}} = s_{4,2}^{(2)} = 3/50, \end{array} \right.$$
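These nine equations again decouple; the solution reported next can be checked against both sets of marginal constraints with a few lines of Python (names ours):

```python
from fractions import Fraction
F = Fraction

# the nonzero coefficients s_{k11,k12;k21,k22} solving the system
s = {(2, 0, 2, 1): F(6, 25), (3, 0, 3, 1): F(8, 25), (4, 0, 4, 1): F(6, 25),
     (2, 0, 2, 2): F(3, 50), (3, 0, 3, 2): F(2, 25), (4, 0, 4, 2): F(3, 50)}

def m1(k1, k2):
    # marginal over the state-2 thresholds
    return sum(v for key, v in s.items() if key[:2] == (k1, k2))

def m2(k1, k2):
    # marginal over the state-1 thresholds
    return sum(v for key, v in s.items() if key[2:] == (k1, k2))

assert m1(2, 0) == F(3, 10) and m1(3, 0) == F(2, 5) and m1(4, 0) == F(3, 10)
assert m2(2, 1) == F(6, 25) and m2(3, 2) == F(2, 25) and m2(4, 2) == F(3, 50)
print("solution satisfies both sets of marginal constraints")
```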

we get

$${s_{2,0;2,1}} = \frac{6}{{25}},{s_{3,0;3,1}} = \frac{8}{{25}},{s_{4,0;4,1}} = \frac{6}{{25}},{s_{2,0;2,2}} = \frac{3}{{50}},{s_{3,0;3,2}} = \frac{2}{{25}},{s_{4,0;4,2}} = \frac{3}{{50}},$$

and ${s_{\boldsymbol{k}}} = 0$ for other $\boldsymbol{k} \in \mathscr{K}$. We then have

$${\boldsymbol{S}^\ast } = \frac{6}{{25}}\boldsymbol{S}_{2,0;2,1:5}^\ast{+} \frac{8}{{25}}\boldsymbol{S}_{3,0;3,1:5}^\ast{+} \frac{6}{{25}}\boldsymbol{S}_{4,0;4,1:5}^\ast{+} \frac{3}{{50}}\boldsymbol{S}_{2,0;2,2:5}^\ast{+} \frac{2}{{25}}\boldsymbol{S}_{3,0;3,2:5}^\ast{+} \frac{3}{{50}}\boldsymbol{S}_{4,0;4,2:5}^\ast ,$$

where $\boldsymbol{S}_{\boldsymbol{k}:5}^\ast{=} {(\boldsymbol{S}_6^{(0)},\boldsymbol{S}_{{k_{1,1}},{k_{1,2}}:5}^{{\ast} (1)},\boldsymbol{S}_{{k_{2,1}},{k_{2,2}}:5}^{{\ast} (2)})^T}$, with

\begin{align*} S_{{\boldsymbol{k}_i};{i_1},{i_2}}^{{\ast} (i)} & = \dfrac{{(6 - {k_{i,1}})(6 - {k_{i,2}})}}{{36}}{I_{\{ {i_1} \ge {k_{i,1}},{i_2} \ge {k_{i,2}}\} }} + \dfrac{{(6 - {k_{i,1}}){k_{i,2}}}}{{36}}{I_{\{ {i_1} \ge {k_{i,1}},{i_2} \ge {k_{i,2}} + 1\} }} \\ & \quad + \dfrac{{{k_{i,1}}(6 - {k_{i,2}})}}{{36}}{I_{\{ {i_1} \ge {k_{i,1}} + 1,{i_2} \ge {k_{i,2}}\} }} + \dfrac{{{k_{i,1}}{k_{i,2}}}}{{36}}{I_{\{ {i_1} \ge {k_{i,1}} + 1,{i_2} \ge {k_{i,2}} + 1\} }}, \\ & \quad \quad \,0 \le {i_2} \le {i_1} \le 6,\textrm{ }i = 1,2. \end{align*}

Then, we clearly have

\begin{align*} {\boldsymbol{S}^{{\ast} (1)}} & = \dfrac{3}{{10}}\boldsymbol{S}_{2,0:5}^{{\ast} (1)} + \dfrac{2}{5}\boldsymbol{S}_{3,0:5}^{{\ast} (1)} + \dfrac{3}{{10}}\boldsymbol{S}_{4,0:5}^{{\ast} (1)} = \dfrac{1}{5}\boldsymbol{S}_{2,0:5}^{(1)} + \dfrac{3}{{10}}\boldsymbol{S}_{3,0:5}^{(1)} + \dfrac{3}{{10}}\boldsymbol{S}_{4,0:5}^{(1)} + \dfrac{1}{5}\boldsymbol{S}_{5,0:5}^{(1)}, \\ {\boldsymbol{S}^{{\ast} (2)}} & = \dfrac{6}{{25}}\boldsymbol{S}_{2,1:5}^{{\ast} (2)} + \dfrac{8}{{25}}\boldsymbol{S}_{3,1:5}^{{\ast} (2)} + \dfrac{6}{{25}}\boldsymbol{S}_{4,1:5}^{{\ast} (2)} + \dfrac{3}{{50}}\boldsymbol{S}_{2,2:5}^{{\ast} (2)} + \dfrac{2}{{25}}\boldsymbol{S}_{3,2:5}^{{\ast} (2)} + \dfrac{3}{{50}}\boldsymbol{S}_{4,2:5}^{{\ast} (2)}\\ & = \dfrac{2}{{15}}\boldsymbol{S}_{2,1:5}^{(2)} + \dfrac{4}{{75}}\boldsymbol{S}_{2,2:5}^{(2)} + \dfrac{1}{5}\boldsymbol{S}_{3,1:5}^{(2)} + \dfrac{2}{{25}}\boldsymbol{S}_{3,2:5}^{(2)} + \dfrac{1}{5}\boldsymbol{S}_{4,1:5}^{(2)} + \dfrac{2}{{25}}\boldsymbol{S}_{4,2:5}^{(2)} + \dfrac{2}{{15}}\boldsymbol{S}_{5,1:5}^{(2)}\\ & \quad + \dfrac{4}{{75}}\boldsymbol{S}_{5,2:5}^{(2)} + \dfrac{1}{{75}}\boldsymbol{S}_{2,3:5}^{(2)} + \dfrac{1}{{50}}\boldsymbol{S}_{3,3:5}^{(2)} + \dfrac{1}{{50}}\boldsymbol{S}_{4,3:5}^{(2)} + \dfrac{1}{{75}}\boldsymbol{S}_{5,3:5}^{(2)}; \end{align*}

that is, the equivalent system of size $6$ of a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system has its multi-state survival signature as

\begin{align*} {\boldsymbol{S}^\ast } & = (\boldsymbol{S}_6^{( 0 )},{\boldsymbol{S}^{{\ast} (1)}},{\boldsymbol{S}^{{\ast} (2)}}) \\ & = {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\ 0& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1\\ 0& 0& 0& 0& 0& 0& 0& 0& {\dfrac{2}{{15}}}& {\dfrac{1}{3}}& {\dfrac{8}{{15}}}& {\dfrac{2}{3}}& {\dfrac{2}{3}}& {\dfrac{{14}}{{75}}}& {\dfrac{7}{{15}}}& {\dfrac{{56}}{{75}}}& {\dfrac{{14}}{{15}}}& {\dfrac{{14}}{{15}}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1 \end{matrix}} \right)^T}. \end{align*}
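As in Example 3.1, the state-2 row of this size-$6$ signature can be recovered from the mixture coefficients above, each signature entry again being an indicator. A Python sketch (names ours):

```python
from fractions import Fraction
F = Fraction

# coefficients of S^{(2)}_{k1,k2:6} in the mixture for S^{*(2)} above
coef = {(2, 1): F(2, 15), (2, 2): F(4, 75), (2, 3): F(1, 75),
        (3, 1): F(1, 5),  (3, 2): F(2, 25), (3, 3): F(1, 50),
        (4, 1): F(1, 5),  (4, 2): F(2, 25), (4, 3): F(1, 50),
        (5, 1): F(2, 15), (5, 2): F(4, 75), (5, 3): F(1, 75)}

def S2(i1, i2):
    # S^{(2)}_{k1,k2:6}(i1, i2) is the indicator I{i1 >= k1, i2 >= k2}
    return sum(c for (k1, k2), c in coef.items() if i1 >= k1 and i2 >= k2)

print(S2(2, 2), S2(3, 3), S2(5, 2))   # 14/75 1/2 14/15
```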

Example 3.3. For the multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system in Example 3.1, according to Theorem 2.4, the multi-state survival signature of its equivalent system of size $6$ can be directly given as ${\boldsymbol{S}^{[2]\ast }} = \sum\nolimits_{\boldsymbol{k} \in \mathscr{K}} {{s_{\boldsymbol{k}}}\boldsymbol{S}_{\boldsymbol{k}:4}^{[2]\ast }}$, with ${s_{\boldsymbol{k}}},\boldsymbol{k} \in \mathscr{K}$, being the same as in Example 3.1. Then, we have

$${\boldsymbol{S}^{[2]\ast }} = \frac{1}{2}\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast } + \frac{1}{2}\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast },$$

where $\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast } = {(\boldsymbol{S}_6^{(0)},\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast (1)},\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast (2)})^T}$ and $\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast } = {(\boldsymbol{S}_6^{(0)},\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast (1)},\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast (2)})^T}$, with

\begin{align*} \boldsymbol{S}_{2,0;2,1:4}^{[2]\ast } & = \dfrac{{6 \times 10}}{{225}}{\boldsymbol{S}_{2,0;2,1:6}} + \dfrac{{6 \times 4}}{{225}}{\boldsymbol{S}_{2,0;2,2:6}} + \dfrac{{6 \times 1}}{{225}}{\boldsymbol{S}_{2,0;2,3:6}}\\ & \quad + \dfrac{{6 \times 10}}{{225}}{\boldsymbol{S}_{3,0;3,1:6}} + \dfrac{{6 \times 4}}{{225}}{\boldsymbol{S}_{3,0;3,2:6}} + \dfrac{{6 \times 1}}{{225}}{\boldsymbol{S}_{3,0;3,3:6}}\\ & \quad + \dfrac{{3 \times 10}}{{225}}{\boldsymbol{S}_{4,0;4,1:6}} + \dfrac{{3 \times 4}}{{225}}{\boldsymbol{S}_{4,0;4,2:6}} + \dfrac{{3 \times 1}}{{225}}{\boldsymbol{S}_{4,0;4,3:6}},\\ \boldsymbol{S}_{3,0;3,1:4}^{[2]\ast } & = \dfrac{{3 \times 10}}{{225}}{\boldsymbol{S}_{3,0;3,1:6}} + \dfrac{{3 \times 4}}{{225}}{\boldsymbol{S}_{3,0;3,2:6}} + \dfrac{{3 \times 1}}{{225}}{\boldsymbol{S}_{3,0;3,3:6}}\\ & \quad + \dfrac{{6 \times 10}}{{225}}{\boldsymbol{S}_{4,0;4,1:6}} + \dfrac{{6 \times 4}}{{225}}{\boldsymbol{S}_{4,0;4,2:6}} + \dfrac{{6 \times 1}}{{225}}{\boldsymbol{S}_{4,0;4,3:6}}\\ & \quad + \dfrac{{6 \times 10}}{{225}}{\boldsymbol{S}_{5,0;5,1:6}} + \dfrac{{6 \times 4}}{{225}}{\boldsymbol{S}_{5,0;5,2:6}} + \dfrac{{6 \times 1}}{{225}}{\boldsymbol{S}_{5,0;5,3:6}}. \end{align*}
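Each coefficient above factors into one triangle-rule weight per component-state level, each with denominator $\binom{6}{4} = 15$; for instance, $(6 \times 10)/225 = (6/15)(10/15)$. A Python sketch of the single-level factor from Remark 2.4 (function name ours):

```python
from fractions import Fraction
from math import comb

def level_factor(k1, k2, h1, h2, n=4, l=2):
    # One j-factor of the weight in Remark 2.4; k1 = 0 means r_j = 1,
    # so only the second threshold is active and h1 is ignored.
    p = Fraction(1, comb(n + l, n))
    if k1 > 0:
        p *= comb(h1 - 1, k1 - 1)
        if k2 > k1:
            p *= comb(h2 - h1 - 1, k2 - k1 - 1)
    else:
        p *= comb(h2 - 1, k2 - 1)
    return p * comb(n + l - h2, n - k2)

# coefficient of S_{2,0;2,1:6} above: level j=1 has (k,h) = (2,2)->(2,2),
# level j=2 has thresholds (0,1) with h2 = 1
w = level_factor(2, 2, 2, 2) * level_factor(0, 1, 1, 1)
print(w)   # 4/15, that is, (6 x 10)/225 as in the text
```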

Note that $\boldsymbol{S}_{2,0;2,3:6}^{(2)} = \boldsymbol{S}_{2,0;3,3:6}^{(2)}$, since ${i_2} \le {i_1}$ always holds. Then, we clearly have

\begin{align*} {\boldsymbol{S}^{[2]\ast (1)}} & = \dfrac{1}{2}\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast (1)} + \dfrac{1}{2}\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast (1)} = \dfrac{1}{5}\boldsymbol{S}_{2,0:6}^{(1)} + \dfrac{3}{{10}}\boldsymbol{S}_{3,0:6}^{(1)} + \dfrac{3}{{10}}\boldsymbol{S}_{4,0:6}^{(1)} + \dfrac{1}{5}\boldsymbol{S}_{5,0:6}^{(1)},\\ {\boldsymbol{S}^{[2]\ast (2)}} & = \dfrac{1}{2}\boldsymbol{S}_{2,0;2,1:4}^{[2]\ast (2)} + \dfrac{1}{2}\boldsymbol{S}_{3,0;3,1:4}^{[2]\ast (2)}\\ & = \dfrac{2}{{15}}\boldsymbol{S}_{2,1:6}^{(2)} + \dfrac{4}{{75}}\boldsymbol{S}_{2,2:6}^{(2)} + \dfrac{1}{{75}}\boldsymbol{S}_{2,3:6}^{(2)} + \dfrac{1}{5}\boldsymbol{S}_{3,1:6}^{(2)} + \dfrac{2}{{25}}\boldsymbol{S}_{3,2:6}^{(2)} + \dfrac{1}{{50}}\boldsymbol{S}_{3,3:6}^{(2)}\\ & \quad + \dfrac{1}{5}\boldsymbol{S}_{4,1:6}^{(2)} + \dfrac{2}{{25}}\boldsymbol{S}_{4,2:6}^{(2)} + \dfrac{1}{{50}}\boldsymbol{S}_{4,3:6}^{(2)} + \dfrac{2}{{15}}\boldsymbol{S}_{5,1:6}^{(2)} + \dfrac{4}{{75}}\boldsymbol{S}_{5,2:6}^{(2)} + \dfrac{1}{{75}}\boldsymbol{S}_{5,3:6}^{(2)}; \end{align*}

that is, the equivalent system of size $6$ of a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system has its multi-state survival signature as

\begin{align*} {\boldsymbol{S}^{[2]\ast }} & = (\boldsymbol{S}_6^{(0)},{\boldsymbol{S}^{[2]\ast (1)}},{\boldsymbol{S}^{[2]\ast (2)}})\\ & = {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\[6pt] 0& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1\\[9pt] 0& 0& 0& 0& 0& 0& 0& 0& {\dfrac{2}{{15}}}& {\dfrac{1}{3}}& {\dfrac{8}{{15}}}& {\dfrac{2}{3}}& {\dfrac{2}{3}}& {\dfrac{{14}}{{75}}}& {\dfrac{7}{{15}}}& {\dfrac{{56}}{{75}}}& {\dfrac{{14}}{{15}}}& {\dfrac{{14}}{{15}}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1 \end{matrix}} \right)^T}, \end{align*}

which is exactly the same as ${\boldsymbol{S}^\ast }$ obtained earlier in Example 3.2, as we would expect.

4. Comparison of systems of different sizes

In the previous section, we considered a wireless sensor system with four sensors and presented multi-state survival signatures of it and of its equivalent systems of sizes $5$ and $6$. In this section, two different wireless sensor system structures, of sizes $5$ and $6$, are discussed in Examples 4.1 and 4.2, respectively, to demonstrate the use of the previously established results in comparing multi-state coherent or mixed systems with different numbers of i.i.d. multi-state components.

Example 4.1. Consider a wireless sensor system with five i.i.d. sensors and a multi-state linear consecutive $(3,2)$-out-of-$5$:$G$ structure with sparse $(0,1)$, which works (perfectly or imperfectly) if and only if there are at least three consecutive sensors working (perfectly or imperfectly), works perfectly if and only if there are also at least two consecutive sensors with sparse $1$ working perfectly, and fails if it does not work. As in Example 3 of Yi et al. [Reference Yi, Balakrishnan and Cui27], the multi-state survival signature of such a system can be shown to be

\begin{align*} \boldsymbol{S} & = (\boldsymbol{S}_5^{(0)},{\boldsymbol{S}^{(1)}},{\boldsymbol{S}^{(2)}})\\ & = {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\[6pt] {S_{0,0}^{(1)}}& {S_{1,0}^{(1)}}& {S_{2,0}^{(1)}}& {S_{3,0}^{(1)}}& {S_{4,0}^{(1)}}& {S_{5,0}^{(1)}}& {S_{1,1}^{(1)}}& {S_{2,1}^{(1)}}& {S_{3,1}^{(1)}}& {S_{4,1}^{(1)}}& {S_{5,1}^{(1)}}& {S_{2,2}^{(1)}}& {S_{3,2}^{(1)}}& {S_{4,2}^{(1)}}& {S_{5,2}^{(1)}}& {S_{3,3}^{(1)}}& {S_{4,3}^{(1)}}& {S_{5,3}^{(1)}}& {S_{4,4}^{(1)}}& {S_{5,4}^{(1)}}& {S_{5,5}^{(1)}}\\[6pt] {S_{0,0}^{(2)}}& {S_{1,0}^{(2)}}& {S_{2,0}^{(2)}}& {S_{3,0}^{(2)}}& {S_{4,0}^{(2)}}& {S_{5,0}^{(2)}}& {S_{1,1}^{(2)}}& {S_{2,1}^{(2)}}& {S_{3,1}^{(2)}}& {S_{4,1}^{(2)}}& {S_{5,1}^{(2)}}& {S_{2,2}^{(2)}}& {S_{3,2}^{(2)}}& {S_{4,2}^{(2)}}& {S_{5,2}^{(2)}}& {S_{3,3}^{(2)}}& {S_{4,3}^{(2)}}& {S_{5,3}^{(2)}}& {S_{4,4}^{(2)}}& {S_{5,4}^{(2)}}& {S_{5,5}^{(2)}} \end{matrix}} \right)^T}\\ & = \frac{1}{{10}}{\left( {\begin{matrix} {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}& {10}\\ 0& 0& 0& 3& 8& {10}& 0& 0& 3& 8& {10}& 0& 3& 8& {10}& 3& 8& {10}& 8& {10}& {10}\\ 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 3& 6& 7& 3& 8& {10}& 8& {10}& {10} \end{matrix}} \right)^T}\\ & \le \frac{1}{{50}}{\left( {\begin{matrix} {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& {15}& {35}& {50}& {50}& 0& {15}& {35}& {50}& {50}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50}\\ 0& 0& 0& 0& 0& 0& 0& {12}& {28}& {40}& {40}& {15}& {35}& {50}& {50}& {35}& {50}& {50}& {50}& {50}& {50} \end{matrix}} \right)^T}, \end{align*}

which implies that its system structure is not as good as that of a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system whose equivalent system of size $5$ has been discussed in Example 3.1. This means that a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system tends to have better performance than a multi-state linear consecutive $(3,2)$-out-of-$5$:$G$ wireless sensor system with sparse $(0,1)$ if their i.i.d. multi-state sensors have the same lifetime distributions.
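The entrywise dominance used in this comparison can be verified mechanically; a Python sketch with the two signatures' state-1 and state-2 rows entered in the same $({i_1},{i_2})$ order as displayed (variable names ours):

```python
from fractions import Fraction as F

# state-1 and state-2 rows of the two signatures, in the displayed order
cons5 = {
    1: [F(x, 10) for x in (0, 0, 0, 3, 8, 10, 0, 0, 3, 8, 10,
                           0, 3, 8, 10, 3, 8, 10, 8, 10, 10)],
    2: [F(x, 10) for x in (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                           0, 3, 6, 7, 3, 8, 10, 8, 10, 10)],
}
equiv5 = {
    1: [F(x, 50) for x in (0, 0, 15, 35, 50, 50, 0, 15, 35, 50, 50,
                           15, 35, 50, 50, 35, 50, 50, 50, 50, 50)],
    2: [F(x, 50) for x in (0, 0, 0, 0, 0, 0, 0, 12, 28, 40, 40,
                           15, 35, 50, 50, 35, 50, 50, 50, 50, 50)],
}
dominated = all(a <= b for i in (1, 2) for a, b in zip(cons5[i], equiv5[i]))
print(dominated)   # True: the size-5 consecutive system is dominated entrywise
```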

Example 4.2. Consider a wireless sensor system with six i.i.d. sensors and a multi-state linear consecutive $(3,2)$-out-of-$6$:$G$ structure with sparse $(0,1)$, as defined in Example 4.1. As in Example 3 of Yi et al. [Reference Yi, Balakrishnan and Cui27], the multi-state survival signature of such a system can be shown to be

\begin{align*} \boldsymbol{S} & = (\boldsymbol{S}_6^{(0)},{\boldsymbol{S}^{(1)}},{\boldsymbol{S}^{(2)}})\\ & = {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\[6pt] 0& 0& 0& {\dfrac{1}{5}}& {\dfrac{3}{5}}& 1& 1& 0& 0& {\dfrac{1}{5}}& {\dfrac{3}{5}}& 1& 1& 0& {\dfrac{1}{5}}& {\dfrac{3}{5}}& 1& 1& {\dfrac{1}{5}}& {\dfrac{3}{5}}& 1& 1& {\dfrac{3}{5}}& 1& 1& 1& 1& 1\\[9pt] 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& 0& {\dfrac{1}{5}}& {\dfrac{{37}}{{90}}}& {\dfrac{3}{5}}& {\dfrac{3}{5}}& {\dfrac{1}{5}}& {\dfrac{3}{5}}& 1& 1& {\dfrac{3}{5}}& 1& 1& 1& 1& 1 \end{matrix}} \right)^T}\\ & < {\left( {\begin{matrix} 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1& 1\\[6pt] 0& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& 0& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{5}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1\\[9pt] 0& 0& 0& 0& 0& 0& 0& 0& {\dfrac{2}{{15}}}& {\dfrac{1}{3}}& {\dfrac{8}{{15}}}& {\dfrac{2}{3}}& {\dfrac{2}{3}}& {\dfrac{{14}}{{75}}}& {\dfrac{7}{{15}}}& {\dfrac{{56}}{{75}}}& {\dfrac{{14}}{{15}}}& {\dfrac{{14}}{{15}}}& {\dfrac{1}{2}}& {\dfrac{4}{5}}& 1& 1& {\dfrac{4}{5}}& 1& 1& 1& 1& 1 \end{matrix}} \right)^T}, \end{align*}

which implies that its system structure is not as good as that of a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system whose equivalent system of size $6$ has been discussed in Examples 3.2 and 3.3. This means that a multi-state linear consecutive $(2,1)$-out-of-$4$:$G$ wireless sensor system tends to have better performance than a multi-state linear consecutive $(3,2)$-out-of-$6$:$G$ wireless sensor system with sparse $(0,1)$ if their i.i.d. multi-state sensors have the same lifetime distributions.
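As in Example 4.1, the elementwise comparison of the two size-6 signatures can be verified with a minimal Python sketch (an illustration, not part of the original derivation) that transcribes the state-1 and state-2 rows as exact fractions (the state-0 rows, identically equal to 1, are omitted):

```python
from fractions import Fraction as F

def row(s):
    """Parse a whitespace-separated row of fractions into exact rationals."""
    return [F(t) for t in s.split()]

# State-1 and state-2 rows of the multi-state survival signature of the
# consecutive (3,2)-out-of-6:G system with sparse (0,1), transcribed from
# the first matrix above (entries grouped by j = 0, 1, ..., 6).
S6 = [
    row("0 0 0 1/5 3/5 1 1  0 0 1/5 3/5 1 1  0 1/5 3/5 1 1  1/5 3/5 1 1  3/5 1 1  1 1  1"),
    row("0 0 0 0 0 0 0  0 0 0 0 0 0  0 1/5 37/90 3/5 3/5  1/5 3/5 1 1  3/5 1 1  1 1  1"),
]
# Corresponding rows of the size-6 equivalent signature of the consecutive
# (2,1)-out-of-4:G system (Examples 3.2 and 3.3), from the second matrix.
Seq = [
    row("0 0 1/5 1/2 4/5 1 1  0 1/5 1/2 4/5 1 1  1/5 1/2 4/5 1 1  1/2 4/5 1 1  4/5 1 1  1 1  1"),
    row("0 0 0 0 0 0 0  0 2/15 1/3 8/15 2/3 2/3  14/75 7/15 56/75 14/15 14/15  1/2 4/5 1 1  4/5 1 1  1 1  1"),
]
# Elementwise comparison: weakly smaller everywhere, strictly somewhere.
weaker = all(a <= b for ra, rb in zip(S6, Seq) for a, b in zip(ra, rb))
strict = any(a < b for ra, rb in zip(S6, Seq) for a, b in zip(ra, rb))
print(weaker and strict)  # True
```

Each row has $\binom{6+2}{2} = 28$ entries, one per admissible pair $(i,j)$ with $0 \le j \le i \le 6$, matching the 28 columns of the matrices above.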

5. Concluding remarks

In this work, we have first redefined the concept of the multi-state survival signature of multi-state coherent or mixed systems with i.i.d. multi-state components. For two multi-state survival signatures of different sizes, transformation formulas have been established that transform the one of smaller size into one of larger size, thereby enabling stochastic comparison of the corresponding multi-state coherent or mixed systems of different sizes. Specific examples have been presented to illustrate the transformation formulas established here, as well as their use in comparing systems of different sizes. The theoretical results derived here could prove useful in system design and management decision making for practical multi-state systems such as telecommunication systems, wireless sensor systems, power systems, radar stations and many others. These transformation formulas are derived under the assumption of independence of component lifetimes at different state levels. We are currently working on developing similar results under weaker assumptions and hope to report the corresponding findings in a future paper.

Funding

This research work was supported by the National Natural Science Foundation of China (No. 72001016, No. 71931001 and No. 71722007), the Fundamental Research Funds for the Central Universities (buctrc202102), the Funds for First-class Discipline Construction (XK1802-5) and the Natural Sciences and Engineering Research Council of Canada (for the second author, RGPIN-2020-06733).

Competing interests

The authors declare none.

References

[1] Amini-Seresht, E., Khaledi, B.E., & Kochar, S. (2020). Some new results on stochastic comparisons of coherent systems using signatures. Journal of Applied Probability 57(1): 156–173.
[2] Balakrishnan, N. & Volterman, W. (2014). On the signatures of ordered system lifetimes. Journal of Applied Probability 51(1): 82–91.
[3] Behrensdorf, J., Regenhardt, T.E., Broggi, M., & Beer, M. (2021). Numerically efficient computation of the survival signature for the reliability analysis of large networks. Reliability Engineering and System Safety 216: 107935.
[4] Burkschat, M. & Navarro, J. (2018). Stochastic comparisons of systems based on sequential order statistics via properties of distorted distributions. Probability in the Engineering and Informational Sciences 32(2): 246–274.
[5] Coolen, F.P.A. & Coolen-Maturi, T. (2012). Generalizing the signature to systems with multiple types of components. In Zamojski, W., Mazurkiewicz, J., Sugier, J., Walkowiak, T., & Kacprzyk, J. (eds), Complex systems and dependability. New York: Springer, pp. 115–130.
[6] Coolen-Maturi, T., Coolen, F.P.A., & Balakrishnan, N. (2021). The joint survival signature of coherent systems with shared components. Reliability Engineering and System Safety 207: 107350.
[7] Cui, L.R., Gao, H.D., & Mo, Y.C. (2018). Reliability for k-out-of-n:F balanced systems with m sectors. IISE Transactions 50(5): 381–393.
[8] Da, G.F. & Hu, T.Z. (2013). On bivariate signatures for systems with independent modules. In Li, H.J. & Li, X.H. (eds), Stochastic orders in reliability and risk. New York: Springer, pp. 143–166.
[9] Ding, W.Y., Fang, R., & Zhao, P. (2021). An approach to comparing coherent systems with ordered components by using survival signatures. IEEE Transactions on Reliability 70(2): 495–506.
[10] Eryilmaz, S. & Tuncel, A. (2016). Generalizing the survival signature to unrepairable homogeneous multi-state systems. Naval Research Logistics 63(8): 593–599.
[11] Gertsbakh, I. & Shpungin, Y. (2012). Multidimensional spectra of multistate systems with binary components. In Lisnianski, A. & Frenkel, I. (eds), Recent advances in system reliability. London: Springer, pp. 49–61.
[12] Goli, S. (2019). On the conditional residual lifetime of coherent systems under double regularly checking. Naval Research Logistics 66(4): 352–363.
[13] Kochar, S., Mukerjee, H., & Samaniego, F.J. (1999). The "signature" of a coherent system and its application to comparisons among systems. Naval Research Logistics 46(5): 507–523.
[14] Lindqvist, B.H., Samaniego, F.J., & Huseby, A.B. (2016). On the equivalence of systems of different sizes, with applications to system comparisons. Advances in Applied Probability 48(2): 332–348.
[15] Lindqvist, B.H., Samaniego, F.J., & Wang, N.N. (2021). On the comparison of performance-per-cost for coherent and mixed systems. Probability in the Engineering and Informational Sciences 35(4): 867–884.
[16] Lisnianski, A., Frenkel, I., & Ding, Y. (2010). Multi-state system reliability analysis and optimization for engineers and industrial managers. London: Springer.
[17] Natvig, B. (2010). Multistate systems: Reliability theory with applications. Hoboken, New Jersey: Wiley.
[18] Navarro, J. & Fernandez-Sanchez, J. (2020). On the extension of signature-based representations for coherent systems with dependent non-exchangeable components. Journal of Applied Probability 57(2): 429–440.
[19] Navarro, J., Ruiz, J.M., & Sandoval, C.J. (2007). Properties of coherent systems with dependent components. Communications in Statistics - Theory and Methods 36: 175–191.
[20] Navarro, J., Samaniego, F.J., & Balakrishnan, N. (2013). Mixture representations for the joint distribution of lifetimes of two coherent systems with shared components. Advances in Applied Probability 45(4): 1011–1027.
[21] Navarro, J., Samaniego, F.J., Balakrishnan, N., & Bhattacharya, D. (2008). On the application and extension of system signatures in engineering reliability. Naval Research Logistics 55(4): 313–327.
[22] Samaniego, F.J. (1985). On closure of the IFR class under formation of coherent systems. IEEE Transactions on Reliability R-34(1): 69–72.
[23] Samaniego, F.J. (2007). System signatures and their applications in engineering reliability. New York: Springer.
[24] Samaniego, F.J., Balakrishnan, N., & Navarro, J. (2009). Dynamic signatures and their use in comparing the reliability of new and used systems. Naval Research Logistics 56(6): 577–591.
[25] Yi, H., Balakrishnan, N., & Cui, L.R. (2020). On the multi-state signatures of ordered system lifetimes. Advances in Applied Probability 52(1): 291–318.
[26] Yi, H., Balakrishnan, N., & Cui, L.R. (2021). Comparisons of multi-state systems with binary components of different sizes. Methodology and Computing in Applied Probability 23(4): 1309–1321.
[27] Yi, H., Balakrishnan, N., & Cui, L.R. (2021). Computation of survival signatures for multi-state consecutive-k systems. Reliability Engineering and System Safety 208: 107429.
[28] Yi, H., Balakrishnan, N., & Cui, L.R. (2022). On dependent multi-state semi-coherent systems based on multi-state joint signature. Methodology and Computing in Applied Probability, to appear. DOI: 10.1007/s11009-021-09877-3.
[29] Yi, H., Balakrishnan, N., & Li, X. (2022). Signatures of multi-state systems based on a series/parallel/recurrent structure of modules. Probability in the Engineering and Informational Sciences, to appear. DOI: 10.1017/S0269964821000103.
[30] Yi, H., Balakrishnan, N., & Li, X. (2022). Equivalency in joint signatures for binary/multi-state systems of different sizes. Probability in the Engineering and Informational Sciences, to appear. DOI: 10.1017/S0269964821000231.
[31] Yi, H. & Cui, L.R. (2018). A new computation method for signature: Markov process method. Naval Research Logistics 65(5): 410–426.
[32] Zarezadeh, S., Asadi, M., & Eftekhar, S. (2019). Signature-based information measures of multi-state networks. Probability in the Engineering and Informational Sciences 33(3): 438–459.