
Ordering and ageing properties of developed sequential order statistics governed by the Archimedean copula

Published online by Cambridge University Press:  26 June 2023

Tanmay Sahoo*
Affiliation:
Indian Institute of Technology Jodhpur
Nil Kamal Hazra*
Affiliation:
Indian Institute of Technology Jodhpur
*Postal address: Department of Mathematics, Indian Institute of Technology Jodhpur, Karwar 342037, India.

Abstract

Developed sequential order statistics (DSOS) are very useful in modeling the lifetimes of systems with dependent components, where the failure of one component affects the performance of remaining surviving components. We study some stochastic comparison results for DSOS in both one-sample and two-sample scenarios. Furthermore, we study various ageing properties of DSOS. We state many useful results for generalized order statistics as well as ordinary order statistics with dependent random variables. At the end, some numerical examples are given to illustrate the proposed results.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Order statistics play a significant role in probability, statistics, finance, economics, reliability theory, and many other fields. In reliability theory, they have a one-to-one relationship with the lifetimes of k-out-of-n systems. A system of n components is said to be a k-out-of-n system if it functions as long as at least k of its n components function. If $X_1,X_2,\dots,X_n$ represent the lifetimes of the n components of a k-out-of-n system, then the system lifetime is represented by the $(n-k+1)$ th order statistic, namely, $X_{n-k+1:n}$ . Two special cases of k-out-of-n systems are the parallel system ( $k=1$ ) and the series system ( $k=n$ ). There are many real-life systems that are structurally the same as k-out-of-n systems (see [Reference Barlow and Proschan4, Reference Navarro32, Reference Samaniego35]).

In conventional modeling of the lifetimes of k-out-of-n systems, it is generally assumed that the failure of one component does not have any impact on the lifetimes of the remaining surviving components. However, in most cases, this assumption oversimplifies any given real-life scenario. For example, the load of an aircraft engine, when it fails, is transferred to the remaining surviving engines, and consequently the lifetimes of the remaining engines decrease. To model such phenomena, we need more generalized models that can capture the impact of the failure of one component on the others. To deal with this problem, Kamps [Reference Kamps25] introduced the notion of sequential order statistics (SOS) (see the definition in [Reference Kamps25]), which is an extension of that of ordinary order statistics (OS). Subsequently, Cramer and Kamps [Reference Cramer and Kamps17] introduced sequential k-out-of-n systems as an extension of the usual k-out-of-n systems. As before, the lifetime of a sequential k-out-of-n system is the same as the $(n-k+1)$ th-order sequential order statistic of the lifetimes of the components of the system. In a sequential k-out-of-n system, when a component fails, the distributions of the residual lifetimes of the remaining components are assumed to be different from the distributions that they had previously. This distributional change can be viewed as failure-related damage or an increment of pressure imposed on the surviving components. Numerous papers have been written on this topic; see, e.g., [Reference Balakrishnan, Beutner and Kamps1, Reference Burkschat11, Reference Burkschat, Kamps and Kateri12, Reference Burkschat and Navarro13, Reference Cramer and Kamps17, Reference Cramer, Kamps, Balakrishnan, Rao and North-Holland18, Reference Cramer and Kamps19, Reference Kamps25] and the references therein.

Sequential order statistics (or equivalently, sequential k-out-of-n systems) are defined based on the assumption that the remaining components in each step (i.e., after each failure) are independent. However, most real-life systems, given their complex structures, consist of components whose lifetimes are not necessarily independent. Below we discuss two examples.

Example 1.1. Assume that the manager of an oil transmission pipeline intends to build a new station with five pumps to raise the oil pressure throughout the pipeline. If three out of the five pumps are operational, then the station functions effectively. Here, the lifetimes of the five pumps are indeed dependent. Again, the failure of a pump increases the load on the remaining pumps, because proper transmission requires a certain level of oil pressure (i.e., there is a load-sharing effect). This is an example of a sequential 3-out-of-5 system with dependent component lifetimes (see [Reference Baratnia and Doostparast3]).

Example 1.2. Consider a four-engine jet aircraft that functions as long as at least two of its engines function. Here, the lifetimes of the four engines are interdependent. Moreover, when an engine fails, the load on the remaining engines increases to provide sufficient power to comfortably reach a diversion airport or continue the journey. This system can be viewed as a sequential 2-out-of-4 system with dependent component lifetimes.

Given the interdependency structure between components of a system, the SOS model may not be appropriate to describe these scenarios. Recently, Baratnia and Doostparast [Reference Baratnia and Doostparast3] have introduced an extended SOS model, known as developed sequential order statistics (DSOS), to describe the lifetimes of systems with dependent components. The definition of DSOS can be found in [Reference Baratnia and Doostparast3].

The study of the ordering and ageing properties of order statistics is one of the important problems in reliability theory. A large volume of research on various aspects of the ordering and ageing properties of ordinary order statistics can be found in the literature (see [Reference Balakrishnan and Zhao2, Reference Barlow and Proschan4, Reference Belzunce, Franco, Ruiz and Ruiz6, Reference Franco, Ruiz and Ruiz21, Reference Hazra, Kuiti, Finkelstein and Nanda22, Reference Hazra, Kuiti, Finkelstein and Nanda23, Reference Li and Fang28], to name a few). Furthermore, various ordering properties of generalized order statistics (see the definition in [Reference Kamps and Cramer26]) have been studied by [Reference Belzunce, Mercader and Ruiz7, Reference Belzunce, Mercader and Ruiz8, Reference Franco, Ruiz and Ruiz20, Reference Hu and Zhuang24] and many others. One may note that, if the underlying distribution functions follow the proportional hazard rate model, then generalized order statistics and sequential order statistics are the same. However, in general, sequential order statistics and generalized order statistics are conceptually different. The ordering properties of sequential order statistics have been considered in [Reference Burkschat and Navarro14, Reference Burkschat and Navarro15, Reference Burkschat and Torrado16, Reference Navarro and Burkschat31, Reference Torrado, Lillo and Wiper39, Reference Zhuang and Hu40] and the references therein. Furthermore, Burkschat and Navarro [Reference Burkschat and Navarro13] studied closure properties of different ageing classes under the formation of sequential k-out-of-n systems. Later, Barmalzan [Reference Barmalzan, Haidari and Balakrishnan5] studied various ordering and ageing properties of residual lifetimes of live components in sequential k-out-of-n systems. One may note that all of the aforementioned studies were carried out for sequential k-out-of-n systems with independent components (or equivalently, sequential order statistics with independent random variables). The ordering and ageing properties of ordinary order statistics with dependent random variables, governed by the Archimedean copula, were considered in [Reference Li and Fang28, Reference Sahoo and Hazra37] and the references therein. To the best of our knowledge, no work in this direction has been done for sequential order statistics with dependent random variables (i.e., for DSOS). Thus, in this paper, our goal is to study the ordering and ageing properties of DSOS with the dependency structure modeled by the Archimedean copula. It is worth mentioning that the proposed study on DSOS generalizes many well established results available for sequential order statistics, generalized order statistics and ordinary order statistics. The novelty of this paper is mainly in considering the DSOS, which subsumes all of the special cases previously considered in the literature.

The rest of the paper is organized as follows. In Section 2, we discuss preliminaries and some useful lemmas. In Section 3, we discuss the main results of this paper. We study some stochastic comparison results for DSOS. Furthermore, we discuss closure properties of different ageing classes for DSOS. In Section 4, we give some numerical examples to demonstrate the sufficient conditions used in our theorems. Finally, concluding remarks are given in Section 5. All proofs of theorems, propositions, lemmas, and corollaries, wherever given, are deferred to the appendix.

2. Preliminaries and useful lemmas

Unless otherwise stated, we use the following notation throughout the paper. For an absolutely continuous random variable X, we denote the probability density function, the cumulative distribution function, the quantile function, the survival function, the hazard function, the reversed hazard function, the mean residual function, the cumulative hazard rate function, and the cumulative reversed hazard rate function by $f_X$ , $F_X$ , $F^{-1}_X$ , $\bar F_X$ , $r_X$ , $\tilde r_X$ , $m_X$ , ${\Delta}_X$ , and $\tilde \Delta_X$ , respectively; here $r_X\equiv f_X/\bar F_X$ , $\tilde r_X\equiv f_X/ F_X$ , $\Delta_X\equiv -\!\ln \bar F_X$ , $\tilde \Delta_X\equiv -\!\ln F_X$ , and $m_X(t)={\int\limits_t^\infty \bar F_X(w)dw}/{\bar F_X(t)}, \text{ for } t\geq 0.$

We use the following acronyms throughout the paper. We write ‘DID’, ‘OS’, ‘SOS’, and ‘DSOS’ for the phrases ‘dependent and identically distributed’, ‘ordinary order statistic(s)’, ‘sequential order statistic(s)’, and ‘developed sequential order statistic(s)’, respectively. By ‘ $\stackrel{d}=$ ’ we mean equality in distribution. All random variables considered in this paper are assumed to be absolutely continuous with strictly increasing cumulative distribution functions.

Copulas are a very effective tool for describing the dependency structure between random variables. In the literature, a large variety of copulas have been introduced to describe different dependency structures. Some of the best-known copulas are the Farlie–Gumbel–Morgenstern copula, the extreme-value copula, the family of Archimedean copulas, and the Clayton–Oakes copula. Among all of these, the family of Archimedean copulas has received the most attention from the researchers because of its mathematical tractability and its ability to describe a wide range of dependency structures. For an encyclopedic treatment of this topic, one may refer to Nelsen [Reference Nelsen33]. Below we give the definition of an Archimedean copula (see [Reference McNeil and Néslehová30]).

Definition 2.1. Let $\phi\;:\; [0, +\infty]\longrightarrow [0,1]$ be a decreasing continuous function such that $\phi(0)=1$ and $\phi(+\infty)=0$ , and let $\psi\equiv\phi^{-1}$ be the pseudo-inverse of $\phi$ . Then

(1) \begin{eqnarray} C(u_1,u_2,\dots,u_n)=\phi\!\left(\psi(u_1)+\psi(u_2)+\dots +\psi(u_n)\right),\quad \text{for }(u_1,u_2,\dots,u_n)\in [0,1]^n, \;\;\;\;\; \end{eqnarray}

is called the Archimedean copula with generator $\phi$ if $(\!-\!1)^k\phi^{(k)}(x)\geq 0$ , for $k=0,1,\dots,n-2$ , and $(\!-\!1)^{n-2}\phi^{(n-2)}(x)$ is decreasing and convex in $x\geq 0$ . $\Box$
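For illustration only (the following sketch is not part of the original analysis), equation (1) can be evaluated numerically once a generator is fixed; the Clayton generator $\phi(t)=(\theta t+1)^{-1/\theta}$ and all parameter values below are assumptions made purely for the example.

```python
import numpy as np

# A minimal sketch (not from the paper): evaluate the Archimedean copula (1)
# from a generator phi and its pseudo-inverse psi.  The Clayton generator
# phi(t) = (theta*t + 1)^(-1/theta) and the chosen values are assumptions.

def clayton_phi(t, theta):
    return (theta * t + 1.0) ** (-1.0 / theta)

def clayton_psi(u, theta):
    # pseudo-inverse of the Clayton generator: psi(u) = (u^(-theta) - 1)/theta
    return (u ** (-theta) - 1.0) / theta

def archimedean_copula(u, phi, psi):
    """C(u_1,...,u_n) = phi(psi(u_1) + ... + psi(u_n)), cf. equation (1)."""
    u = np.asarray(u, dtype=float)
    return phi(np.sum(psi(u)))

theta = 2.0
u = [0.7, 0.4, 0.9]
c = archimedean_copula(u,
                       lambda t: clayton_phi(t, theta),
                       lambda v: clayton_psi(v, theta))
print(c)   # a copula value in (0, min(u)]
```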

As an extension of SOS, Baratnia and Doostparast [Reference Baratnia and Doostparast3] introduced the notion of DSOS, which is useful for describing the lifetimes of systems with dependent components. Below we describe the notion of DSOS governed by the Archimedean copula (see [Reference Baratnia and Doostparast3, Reference Navarro and Burkschat31]).

Let $F_1,F_2, \dots ,F_n$ be n absolutely continuous cumulative distribution functions with $F_1^{-1} (1)\leq \dots \leq F_n^{-1}(1)$ . Consider a system of n components installed at time $t = 0$ . Assume that all components of the system are functioning at the time of inception. Let $X^{(1)}_1, X^{(1)}_2, \dots , X^{(1)}_n$ be n DID random variables, with the distribution function $F _1$ , representing the lifetimes of the n components. Assume that the dependency structure between these random variables is described by the Archimedean copula with generator $\phi$ . Then the first component failure time is given by

\begin{align*} X^{\star}_{1:n} = \min\!\Big\{X_1^{(1)},X_2^{(1)}, \dots, X_n^{(1)}\Big\}.\end{align*}

Given $X^{\star}_{1:n} = t_1$ , the residual lifetimes of the $(n - 1)$ remaining components are equal in distribution to the residual lifetimes of $(n-1)$ DID components with age $t_1$ and with the distribution function $F_2 $ (instead of $F_1$ ) with the same dependency structure; here $F_2$ is assumed in place of $F_1$ because the failure of the first component has an impact on the performance of the other components. Let the lifetimes of these DID components be represented by $X^{(2)}_1, X^{(2)}_2, \dots , X^{(2)}_{n-1}$ . Then $X^{(2)}_j \sim F_2(\cdot | t_1)$ , where $\bar{F}_2(x|t_1) = {\bar{F}_2(x)}/{\bar{F}_2(t_1)}$ for $x \geq t_1$ . Moreover, $X^{(2)}_j \geq t_1$ , for $j = 1,2, \dots ,n - 1$ . Furthermore, the second component failure time is given by

\begin{align*} X^{\star}_{2:n} = \min\!\Big\{X_1^{(2)},X_2^{(2)}, \dots, X_{n-1}^{(2)}\Big\}.\end{align*}

Proceeding in this manner, we assume that the ith failure occurs at time $t_i \;(>t_{i-1})$ , i.e., $X^{\star}_{i:n} = t_i$ . Then the residual lifetimes of the $(n - i)$ remaining components are equal in distribution to the residual lifetimes of $(n-i)$ DID components with age $t_i$ and the distribution function $F_{i+1}$ with the same dependency structure. Let the lifetimes of these DID components be represented by $X^{(i+1)}_1, X^{(i+1)}_2, \dots , X^{(i+1)}_{n-i}$ . Then $X^{(i+1)}_j \sim F_{i+1}(\cdot | t_i)$ , where $\bar{F}_{i+1}(x|t_i) = {\bar{F}_{i+1}(x)}/{\bar{F}_{i+1}(t_i)}$ for $x \geq t_i$ . Moreover, note that $X^{(i+1)}_j \geq t_i$ , for $j = 1,2, \dots ,n - i$ . The $(i+1)$ th component failure time is then given by

\begin{align*} X^{\star}_{i+1:n} = \min\!\Big\{X_1^{(i+1)},X_2^{(i+1)}, \dots, X_{n-i}^{(i+1)}\Big\}.\end{align*}

Finally, if the $(n - 1)$ th component failure occurs at time $t_{n-1} = X^{\star}_{n-1:n}$ , then the last component failure time is given by $X^{\star}_{n:n}$ with the reliability function $\bar{F}_{n}(x|t_{n-1}) = {\bar{F}_{n}(x)}/{\bar{F}_{n}(t_{n-1})}$ for $x \geq t_{n-1}$ . The random variables $X^{\star}_{1:n} \leq X^{\star}_{2:n} \leq \dots \leq X^{\star}_{n:n} $ are called the developed sequential order statistics (DSOS) based on $F_1, F_2, \dots, F_n$ , where the dependency structure is described by the Archimedean copula with generator $\phi$ . For brevity, we denote them by $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ).

Remark 2.1. If $F_1\stackrel{d}=F_2\stackrel{d}=\dots\stackrel{d}=F_n\stackrel{d}=F$ (say), then the DSOS, $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, $ $\dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ), reduce to OS of DID random variables with the common distribution function F and the dependency structure described by the Archimedean copula with generator $\phi$ . We denote these OS by $(X_{1:n}, X_{2:n}, $ $\dots,X_{n:n} )\sim$ OS( $ F;\;\phi $ ).

Remark 2.2. One may note that if $(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} )\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ), then $\{X^{\star}_{1:n}, X^{\star}_{2:n},\dots ,X^{\star}_{n:n}\}$ forms a Markov chain with transition probabilities given by

(2) \begin{eqnarray}P\big(X^{\star}_{r:n} > t | X^{\star}_{r-1:n} =x\big) = \phi\bigg(\!(n-r+1) \psi \bigg( \frac{\bar{F}_r(t)}{\bar{F}_r(x)}\bigg)\!\bigg), \quad t\geq x>0,\end{eqnarray}

where $\bar{F}_r(x) >0$ and $\psi\equiv\phi^{-1}$. $\Box$

Below we give an alternative definition of DSOS (see [Reference Cramer and Kamps19]).

Definition 2.2. Let $F_1, \dots, F_n$ be cumulative distribution functions with $F_1^{-1} (1)\leq \dots \leq F_n^{-1}(1)$ , and let

\begin{align*} \Big(Y_j^{(r)}\Big)_{1 \leq r \leq n, 1 \leq j \leq n-r+1}\end{align*}

be dependent random variables with $Y_j^{(r)} \sim F_r$ , $r= 1, 2, \dots, n$ , $j=1, 2, \dots, n-r+1$ , where the dependency structures are described by the same Archimedean copula with generator $\phi$ . Let $X_j^{(1)} = Y_j^{(1)}$ , $j=1,2, \dots, n$ , and $X^{\star}_{1:n} =\min\!\Big\{X_1^{(1)},X_2^{(1)}, \dots, X_n^{(1)}\Big\}.$ For $r=2, 3, \dots, n$ , let

\begin{align*} X_j^{(r)} = F_r^{-1} \Big\{F_r\Big(Y_j^{(r)}\Big)\big[1- F_r\big(X^{\star}_{r-1: n}\big)\big]+ F_r\big(X^{\star}_{r-1: n}\big)\Big\}, \quad j= 1, 2, \dots, n-r+1,\end{align*}

and $X^{\star}_{r:n} = \min\!\Big\{X_1^{(r)},X_2^{(r)}, \dots, X_{n-r+1}^{(r)}\Big\}$ . Then

\begin{align*} \big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim \text{DSOS}( F_1 , F_2 , \dots , F_n;\;\phi ).\end{align*}

Two equivalent representations of DSOS are given in the following two lemmas. The proofs of these lemmas can be carried out along the same lines as in [Reference Burkschat and Navarro13, Reference Cramer and Kamps19] and are therefore omitted.

Lemma 2.1. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). Then

\begin{eqnarray*} &&X_{1:n}^{\star} = \bar{F}_1^{-1} \Big(V^{(1)}\Big), \\[5pt] && X_{i:n}^{\star} = \bar{F}_i^{-1}\Big(V^{(i)} \bar{F}_i\big(X_{i-1:n}^{\star}\big)\Big), \quad\text{ for } i= 2, 3, \dots, n, \end{eqnarray*}

where

\begin{align*} V^{(i)}= \max \Big\{\Big(1-U^{(i)}_1\Big), \dots , \Big(1-U^{(i)}_{n-i+1}\Big)\Big\}\end{align*}

and $U_j^{(i)} \sim Unif(0,1)$ , for $i=1,2 ,\dots, n$ and $j=1, 2, \dots, n-i+1$ , and, for each $i\in\{1,2,\dots, n\}$ , the $U_j^{(i)}$ are dependent random variables governed by the Archimedean copula with generator $\phi$ .

Lemma 2.2. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). Furthermore, let $D_i\equiv -\!\ln\bar{F}_i$ be the cumulative hazard rate function of $F_i$ , for $i=1,2,\dots,n$ . Then

(3) \begin{eqnarray} X_{1:n}^{\star} = D_1^{-1}\Big(W^{(1)}\Big), \end{eqnarray}
(4) \begin{eqnarray} X_{i:n}^{\star} = D_i^{-1}\Big(W^{(i)} + D_i\big(X_{i-1:n}^{\star}\big)\Big), \quad\text{ for } i=2,3, \dots, n, \end{eqnarray}

where

\begin{align*} W^{(i)} = -\!\ln\!\Big(V^{(i)}\Big)= \min\!\Big\{-\!\ln\Big(1-U^{(i)}_1\Big), \dots , -\!\ln \!\Big(1-U^{(i)}_{n-i+1}\Big)\Big\}, \quad i=1,2,\dots, n,\end{align*}

and the $U_j^{(i)}$ are the same as in Lemma 2.1. Moreover, $\{W^{(j)},\; j=1,2, \dots , n\}$ are independent random variables with

(5) \begin{eqnarray} \bar{F}_{W^{(j)}} (t) = \phi\big( (n-j+1) \psi \big( e^{-t}\big) \big), \quad t>0,\;j=1,2, \dots , n, \end{eqnarray}

where $\psi\equiv\phi^{-1}$. $\Box$
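Lemma 2.2 also suggests a simple way to simulate DSOS: draw the independent $W^{(j)}$ from (5) by inversion and apply the recursion (3)–(4). The following sketch (not part of the original paper) illustrates this; the exponential choices for $F_1,\dots,F_n$, the Clayton generator, and all parameter values are assumptions made only for the example.

```python
import numpy as np

# A minimal simulation sketch (not from the paper) for DSOS via Lemma 2.2:
# draw the independent W^(j) from (5) by inversion and apply (3)-(4).
# Assumptions for illustration: exponential F_i with rates lam, Clayton generator.

rng = np.random.default_rng(0)
theta = 2.0                              # assumed Clayton dependence parameter
lam = np.array([1.0, 1.5, 2.0, 3.0])     # assumed hazard rates of F_1,...,F_n
n = len(lam)

phi = lambda t: (theta * t + 1.0) ** (-1.0 / theta)    # generator
psi = lambda u: (u ** (-theta) - 1.0) / theta          # its inverse

def sample_W(j):
    # W^(j) has survival function phi((n-j+1)*psi(e^{-t})), see (5); solving
    # phi((n-j+1)*psi(e^{-t})) = u gives t = -log phi(psi(u)/(n-j+1)).
    u = rng.uniform()
    return -np.log(phi(psi(u) / (n - j + 1)))

def sample_dsos():
    # For exponential F_i, D_i(x) = lam[i-1]*x, so D_i^{-1}(y) = y/lam[i-1].
    x = np.empty(n)
    x[0] = sample_W(1) / lam[0]                                       # (3)
    for i in range(2, n + 1):
        x[i - 1] = (sample_W(i) + lam[i - 1] * x[i - 2]) / lam[i - 1]  # (4)
    return x

print(sample_dsos())   # one realisation: an increasing vector of failure times
```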

Stochastic orders are widely used to compare two or more random variables/vectors. In the literature, numerous types of stochastic orders have been introduced, e.g., the usual stochastic order, the hazard rate order, etc. (see [Reference Shaked and Shanthikumar36, Reference Sahoo and Hazra37]). Below we give the definitions of several stochastic orders that are used in subsequent sections.

Definition 2.3. Let X and Y be two absolutely continuous random variables with non-negative supports. Then X is said to be smaller than Y in the

  1. (a) usual stochastic order, denoted by $X\leq_{st}Y$ or $F_X \leq_{st} F_Y$ , if $\bar F_X(x)\leq \bar F_Y(x)$ for all $x\in\;[0,\infty);$

  2. (b) hazard rate order, denoted by $X\leq_{hr}Y$ or $F_X \leq_{hr} F_Y$ , if ${\bar F_Y(x)}/{\bar F_X(x)}$ is increasing in $x \in [0,\infty);$

  3. (c) reversed hazard rate order, denoted by $X\leq_{rh}Y$ or $F_X \leq_{rh} F_Y$ , if $ {F_Y(x)}/{ F_X(x)}$ is increasing in $x\in [0,\infty);$

  4. (d) likelihood ratio order, denoted by $X\leq_{lr}Y$ or $F_X \leq_{lr} F_Y$ , if ${f_Y(x)}/{f_X(x)}$ is increasing in $x\in(0,\infty);$

  5. (e) mean residual life order, denoted by $X\leq_{ mrl} Y$ or $F_X \leq_{mrl} F_Y$ , if $\int_{x}^{\infty}\bar{F}_Y(u)du/\int_{x}^{\infty}\bar{F}_X(u)du$ is increasing in x over $\{x\;:\;\int_{x}^{\infty}\bar{F}_X(u)du >0\};$

  6. (f) ageing-faster order in terms of the hazard rate, denoted by $X \leq_{c} Y$ or $F_X \leq_{c} F_Y$ , if $ \Delta_X \circ \Delta_Y^{-1} $ is convex on $[0,\infty)$ , or equivalently, $r_X /r_Y$ is increasing on $ [0, \infty)$ ;

  7. (g) ageing faster in average order in terms of the cumulative hazard rate, denoted by $X \leq_{\ast} Y$ or $F_X \leq_{\ast} F_Y$ , if $ \Delta_X \circ \Delta_Y^{-1} $ is star-shaped on $[0,\infty)$ , or equivalently, $\Delta_X /\Delta_Y$ is increasing on $ [0, \infty)$ ;

  8. (h) ageing faster in quantile order in terms of the cumulative hazard rate, denoted by $X \leq_{su} Y$ or $F_X \leq_{su} F_Y$ , if $ \Delta_X \circ \Delta_Y^{-1} $ is superadditive on $[0,\infty)$ . $\Box$
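For illustration (an assumed example, not taken from the paper), the defining ratios in Definition 2.3 can be checked numerically on a grid; the sketch below verifies the hazard rate order for two exponential distributions and the ageing-faster order for a Weibull/exponential pair.

```python
import numpy as np

# Illustrative numerical checks (assumed examples, not from the paper) of
# Definition 2.3 on a grid of points t.

t = np.linspace(0.01, 10.0, 1000)

# (b) hazard rate order: X ~ Exp(2), Y ~ Exp(1); F_bar_Y/F_bar_X = e^t is increasing.
ratio_hr = np.exp(-1.0 * t) / np.exp(-2.0 * t)
print("X <=_hr Y:", bool(np.all(np.diff(ratio_hr) >= 0)))

# (f) ageing-faster order: X ~ Weibull(shape 2), Y ~ Exp(1); r_X(t) = 2t, r_Y(t) = 1.
ratio_c = (2.0 * t) / np.ones_like(t)
print("X <=_c Y:", bool(np.all(np.diff(ratio_c) >= 0)))
```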

Like stochastic orders, stochastic ageings are also very useful tools for describing how a system behaves over time. In the literature, numerous ageing classes (e.g., IFR, IFRA, DFR, DLR, etc.) have been introduced to characterize different ageing properties of a system (see [Reference Barlow and Proschan4, Reference Lai and Xie27, Reference Sengupta and Deshpande38] and the references therein). Below we give the definitions of some ageing classes that are used in this paper.

Definition 2.4. Let X be an absolutely continuous random variable with nonnegative support. Then X is said to have the

  1. (a) increasing likelihood ratio (ILR) (resp. decreasing likelihood ratio (DLR)) property if $f'_{\!\!X}(x)/f_X(x)$ is decreasing (resp. increasing) in $x\geq 0$ (here $f'_{\!\!X}(\!\cdot\!)$ represents the first derivative of $f_X(\!\cdot\!)$ );

  2. (b) increasing failure rate (IFR) (resp. decreasing failure rate (DFR)) property if $r_X(x)$ is increasing (resp. decreasing) in $x\geq 0;$

  3. (c) decreasing reversed failure rate (DRFR) property if $\tilde r_X(x)\;\text{is decreasing in}\;x\geq 0;$

  4. (d) increasing failure rate in average (IFRA) (resp. decreasing failure rate in average (DFRA)) property if $ {-\!\ln \bar F_X(x)}/{x}\text{ is increasing (resp. decreasing) in }x\geq 0;$

  5. (e) multivariate increasing failure rate in average (MIFRA) property if $ E\big(\xi\big(X_1, X_2, \dots , X_n\big) \big) \leq E^{{1}/{\alpha}}\big(\xi^{\alpha}\big(X_1/\alpha, X_2/\alpha, \dots , X_n/\alpha \big) \big)$ for all continuous nonnegative increasing functions $\xi$ and for all $\alpha \in (0, 1);$

  6. (f) new better than used (NBU) (resp. new worse than used (NWU)) property if $ \Delta_X$ is superadditive (resp. subadditive) in $x\geq 0$ , or equivalently, $\bar{F}_X(x + t) \leq(\text{resp.\;} \geq)\, \bar{F}_X(x) \bar{F}_X (t)$ for all $x, t \geq 0$ . $ \Box$

Below we give a list of lemmas that are used in the next section. The proofs of Lemmas 2.4, 2.5, and 2.6 are omitted for the sake of brevity.

Lemma 2.3. Let X and Y be independent random variables with nonnegative supports. If $\zeta$ is a strictly increasing, continuous, and superadditive function, then $\zeta (X) + \zeta (Y) \leq_{st} \zeta \!\left(X + Y \right) $ .

Lemma 2.4. Let X be a nonnegative random variable, and let $a \geq 1$ be a constant.

  1. (a) If X is IFR then $X \leq_{hr} aX$ .

  2. (b) If $u\tilde{r}_X(u)$ is decreasing in $u>0$ , then $X \leq_{rh} aX$ .

  3. (c) If $ f_X \!\left( e^u\right)$ is log-concave in $u>0$ , then $X \leq_{lr} aX$ .

Lemma 2.5. Let Z be a nonnegative random variable, and let X and Y be absolutely continuous nonnegative random variables such that $X \; \geq_{c} \; Y.$ If Z is DRFR, then $\Delta_Y \circ \Delta^{-1}_X \!\left(Z\right)$ is DRFR.

Lemma 2.6. Let Z be a nonnegative random variable, and let X be an absolutely continuous nonnegative random variable. If X is DFR and Z is DRFR, then $\Delta^{-1}_X \!\left(Z\right)$ is DRFR. $\Box$

The proportional hazard rate model (PHR) model is one of the commonly used semi-parametric models. This model has many applications in survival analysis, reliability theory, and many other fields (see [Reference Marshall and Olkin29]). A set of random variables $\{Z_1,Z_2,\dots,Z_n\}$ is said to follow the PHR model if, for $i=1,2,\dots,n$ ,

\begin{align*} \bar F_{Z_i}(t)=(\bar F(t))^{\alpha_i}, \text{ for some }\alpha_i>0\;\;\text{and for all }t>0,\end{align*}

where $\bar F$ is the baseline survival function. We denote this PHR model by $F_{Z_i}\sim$ PHR( $ F;\; \alpha_i$ ), for $i=1,2\dots,n$ .
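As a small illustration (an assumed example, not from the paper), an exponential baseline makes the PHR model a pure rescaling of the hazard rate: with $\bar F(t)=e^{-t}$ one has $\bar F_{Z_i}(t)=e^{-\alpha_i t}$, so $Z_i$ is exponential with rate $\alpha_i$. The sketch below checks this numerically.

```python
import numpy as np

# A tiny numerical illustration (assumed example) of the PHR model with an
# exponential baseline: (F_bar(t))**alpha_i equals exp(-alpha_i * t).

t = np.linspace(0.0, 5.0, 6)
alpha_i = 2.5
baseline_sf = np.exp(-t)            # baseline survival function F_bar(t)
phr_sf = baseline_sf ** alpha_i     # survival function of Z_i under the PHR model
print(np.allclose(phr_sf, np.exp(-alpha_i * t)))   # True
```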

3. Main results

In this section we discuss the main results of this paper. First we give some stochastic comparison results for DSOS. We consider both one-sample and two-sample scenarios. Furthermore, we study some ageing properties of DSOS. In what follows, we introduce some notation.

Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). For $i = 1, 2, \dots , n - 1$ , let

\begin{eqnarray*}Y^{(i+1)}_{1:n-i}= \min\!\Big\{Y^{(i+1)}_1, Y^{(i+1)}_2, \dots , Y^{(i+1)}_{n-i}\Big\},\end{eqnarray*}

where $Y^{(i+1)}_1, Y^{(i+1)}_2, \dots , Y^{(i+1)}_{n-i}$ are DID random variables with the distribution function $ F_{i+1}$ and the dependency structure described by the Archimedean copula with generator $\phi$ . Here, $Y^{(i+1)}_{k}$ is the random variable corresponding to the parent distribution of the kth remaining component at the ith step (i.e., after the ith failure), for $k=1,2,\dots,n-i$ . For the sake of convenience, we call this the kth parent random variable at the ith step. Consequently, $Y^{(i+1)}_{1:n-i}$ represents the minimum order statistic of all parent random variables at the ith step. Intuitively, what this means is as follows. Suppose that all surviving components at the ith step are replaced by a set of new components (i.e., with age zero) with lifetimes having the same distributions as the remaining surviving components have, i.e., $F_{i+1}$ . Then $Y^{(i+1)}_{1:n-i}$ represents the first failure time for this set of new components.

Furthermore, for an Archimedean copula with the generator $\phi$ , we use the following notation:

\begin{eqnarray*}&& H(u)=\frac{u\phi'(u)}{1-\phi(u)},\;\;R(u)=\frac{u\phi'(u)}{\phi (u)}\text{ and }G(u)=\frac{u\phi''(u)}{\phi'(u)},\quad u>0.\end{eqnarray*}

Note that $H(\!\cdot\!)$ , $R(\!\cdot\!)$ and $G(\!\cdot\!)$ are all negative-valued functions, because $\phi(\!\cdot\!)$ is a decreasing and convex function.
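As a quick numerical check (not part of the original analysis), the sketch below evaluates $H$, $R$, and $G$ for one generator and confirms that all three are negative on a grid; the Gumbel–Hougaard generator $\phi(u)=e^{-u^{1/\delta}}$ and the value $\delta=2$ are assumptions made for illustration.

```python
import numpy as np

# A numerical sketch (not from the paper) of the functions H, R, G above.
# Assumed generator: Gumbel-Hougaard phi(u) = exp(-u^(1/delta)) with delta = 2.

delta = 2.0
a = 1.0 / delta
phi   = lambda u: np.exp(-u ** a)
dphi  = lambda u: -a * u ** (a - 1.0) * phi(u)                          # phi'(u)
d2phi = lambda u: a * u ** (a - 2.0) * phi(u) * (a * u ** a + 1.0 - a)  # phi''(u)

H = lambda u: u * dphi(u) / (1.0 - phi(u))
R = lambda u: u * dphi(u) / phi(u)
G = lambda u: u * d2phi(u) / dphi(u)

u = np.linspace(0.01, 20.0, 2000)
print("H < 0:", bool(np.all(H(u) < 0)))
print("R < 0:", bool(np.all(R(u) < 0)))
print("G < 0:", bool(np.all(G(u) < 0)))
```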

3.1. Stochastic comparisons of DSOS in one-sample scenario

In this subsection we study some stochastic comparison results for DSOS.

In the following theorem, we compare two consecutive DSOS with respect to the hazard rate, reversed hazard rate, likelihood ratio, and mean residual life orders. We prove these results under some sufficient conditions that are given in terms of the ith DSOS and the minimum order statistic of the parent random variables at the ith step. The proof of the second part of the theorem can be carried out along the same lines as that of the first part and is therefore omitted.

Theorem 3.1. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). For a given $i\in\{1,2,\dots,$ $n-1\}$ , the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $X^{\star}_{i:n}\; \leq_{hr} \;Y^{(i+1)}_{1:n-i}$ , then $X^{\star}_{i:n} \; \leq_{hr} \; X^{\star}_{i+1:n}$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If $X^{\star}_{i:n} \; \leq_{rh} \; Y^{(i+1)}_{1:n-i}$ , then $X^{\star}_{i:n} \; \leq_{rh} \; X^{\star}_{i+1:n}$ .

  3. (c) Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . If $X^{\star}_{i:n} \; \leq_{lr} \; Y^{(i+1)}_{1:n-i}$ , then $X^{\star}_{i:n} \; \leq_{lr} \; X^{\star}_{i+1:n}$ .

  4. (d) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $X^{\star}_{i:n} \; \leq_{mrl} \; Y^{(i+1)}_{1:n-i}$ , then $X^{\star}_{i:n} \; \leq_{mrl} \; X^{\star}_{i+1:n}$ . $\Box$

In the following theorem, we give slightly more generalized results than in the previous theorem. Here, we compare the first $(i+1)$ consecutive DSOS with respect to the hazard rate, reverse hazard rate, and likelihood ratio orders. These results are proved under some sufficient conditions that are given in terms of an ordering relation between distributions at different steps. These sufficient conditions are easier to verify than those in the previous theorem. The proof of the first part of the theorem can be carried out along the same lines as that of the second part and is therefore omitted.

Because of its mathematical complexity, the result given in Theorem 3.2(c) cannot be proved under a general setup. Thus, we state this result for the PHR model. One may note that, when the underlying distributions follow the PHR model, SOS are the same as generalized order statistics.

Theorem 3.2. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). For a given $i\in\{1,2,\dots,$ $n-1\}$ , the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $F_1 \leq_{c} F_2 \leq_{c} \dots \leq_{c} F_{i+1}$ , then $X^{\star}_{1:n} \; \leq_{hr} \; X^{\star}_{2:n} \; \leq_{hr} \; \dots \; \leq_{hr} \; X^{\star}_{i+1:n}$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If $F_1 \geq_{c} F_2 \geq_{c} \dots \geq_{c} F_{i+1}$ , then $X^{\star}_{1:n} \; \leq_{rh} \; X^{\star}_{2:n} \; \leq_{rh} \; \dots \; \leq_{rh} \; X^{\star}_{i+1:n}$ .

  3. (c) Let ${F}_j\sim$ PHR( $F;\alpha_j$ ), for $j = 1, 2, \dots , i+1$ . Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . Then $X^{\star}_{1:n} \; \leq_{lr} \; X^{\star}_{2:n} \; \leq_{lr} \; \dots \; \leq_{lr} \; X^{\star}_{i+1:n}$ . $\Box$

The corollary below follows immediately from Theorem 3.2 and Remark 2.1. Furthermore, note that the results given in this corollary generalize the results stated in Theorems 3.1(i)–(iii), 3.3, 3.4, 3.7(i)–(iii), 3.8, and 3.9 of Li and Fang [Reference Li and Fang28].

Corollary 3.1. Let $(X_{1:n}, X_{2:n}, \dots,X_{n:n} )\sim$ OS( $ F;\;\phi $ ). Then the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . Then $X_{1:n} \; \leq_{hr} \; X_{2:n} \; \leq_{hr} \; \dots \; \leq_{hr} \; X_{n:n}$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . Then $X_{1:n} \; \leq_{rh} \; X_{2:n} \; \leq_{rh} \; \dots \; \leq_{rh} \; X_{n:n}$ .

  3. (c) Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . Then $X_{1:n} \; \leq_{lr} \; X_{2:n} \; \leq_{lr} \; \dots \; \leq_{lr} \; X_{n:n}$ . $\Box$

Remark 3.1. (Sahoo and Hazra [Reference Sahoo and Hazra37], Remark 3.1(a).) If $uG'(u)/G(u)$ is positive and increasing in $u>0$ , and $G(u)/R(u)$ is increasing in $u>0$ , then $({G(nu)}-{G(u)})/{R(u)}$ is positive and increasing in $u>0$ . $\Box$

In the following theorem, we compare the first-order DSOS with the DSOS of other orders with respect to the hazard rate, reverse hazard rate, and likelihood ratio orders. The results given in this theorem may be obtained from Theorems 3.1 and 3.2. However, the sufficient conditions given in this theorem are different from those given in Theorems 3.1 and 3.2. The proof of this theorem can be carried out along the same lines as that of Theorem 3.1 and is therefore omitted.

Theorem 3.3. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). For a given $i\in\{1,2,\dots,n\}$ , the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing and positive for all $u>0$ . If $F_1 \; \leq_{hr} \; F_i$ , then $X^{\star}_{1:n} \; \leq_{hr} \; X^{\star}_{i:n}$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing and negative for all $u>0$ . If $F_1 \; \leq_{rh} \; F_i$ , then $X^{\star}_{1:n} \; \leq_{rh} \; X^{\star}_{i:n}$ .

  3. (c) Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . If $F_1 \; \leq_{lr} \; F_i$ , then $X^{\star}_{1:n} \; \leq_{lr} \; X^{\star}_{i:n}$ .

3.2. Stochastic comparisons of DSOS in two-sample scenario

In this subsection, we study some stochastic comparison results for DSOS in a two-sample scenario.

In the following theorem, we compare two DSOS, formed from two different samples, with respect to the usual stochastic order.

Theorem 3.4. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ) and $\big(Z^{\star}_{1:n}, Z^{\star}_{2:n}, \dots, Z^{\star}_{n:n} \big)\sim$ DSOS( $ G_1 , G_2 , \dots , G_n;\;\phi $ ). For a given $i\in\{1,2,\dots,n\}$ , suppose that one of the following conditions holds:

  1. (a) $F_j \leq_{st} G_j$ for all $j=1,2,\dots,i$ , and $ F_j \leq_{su} G_j$ for all $j=2,3, \dots, i$ ;

  2. (b) $ F_j \leq_{hr} G_j$ for all $j=1, 2, \dots, i$ .

Then $X^{\star}_{k:n} \; \leq_{st} \; Z^{\star}_{k:n}$ for all $k=1, 2, \dots, i$ . $\Box$

The corollary below follows from Theorem 3.4 and Remark 2.1. Note that the result given here is also mentioned in Sahoo and Hazra [Reference Sahoo and Hazra37]. However, the set of sufficient conditions used in the latter paper is different from the one that is given here.

Corollary 3.2. Let $\big(X_{1:n}, X_{2:n}, \dots,X_{n:n} \big)\sim$ OS( $ F;\;\phi $ ) and $\big(Z_{1:n}, Z_{2:n}, \dots,Z_{n:n} \big)\sim$ OS( $ G;\;\phi $ ). If $F \leq_{st} G$ and $ F \leq_{su} G$ , or $F \leq_{hr} G$ holds, then $X_{k:n} \; \leq_{st} \; Z_{k:n}$ for all $k=1, 2, \dots, n$ . $\Box$

In the following theorem, we prove the same result as in Theorem 3.4 for the hazard rate, reversed hazard rate, and likelihood ratio orders. The second part of this theorem can be proved along the same lines as the first part, so its proof is omitted.

Theorem 3.5. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ) and $\big(Z^{\star}_{1:n}, Z^{\star}_{2:n}, \dots, Z^{\star}_{n:n} \big)\sim$ DSOS( $ G_1 , G_2 , \dots , G_n;\;\phi $ ). Furthermore, let ${F}_j\sim$ PHR( $ F;\;\alpha_j$ ) and ${G}_j\sim$ PHR( $ F;\;\beta_j$ ), for $j = 1, 2, \dots , n$ . For a given $i\in\{1,2,\dots,n\}$ , the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $ \alpha_j \geq \beta_j$ for all $j=1, 2, \dots, i$ , then $X^{\star}_{k:n} \; \leq_{hr} \; Z^{\star}_{k:n}$ for all $k=1, 2, \dots, i$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If $ \alpha_j \geq \beta_j$ for all $j=1, 2, \dots, i$ , then $X^{\star}_{k:n} \; \leq_{rh} \; Z^{\star}_{k:n}$ for all $k=1, 2, \dots, i$ .

  3. (c) Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . If $ \alpha_j \geq \beta_j$ for all $j=1, 2, \dots, i$ , then $X^{\star}_{k:n} \; \leq_{lr} \; Z^{\star}_{k:n}$ for all $k=1, 2, \dots, i$ . $\Box$

The corollary below follows immediately from Theorem 3.5 and Remark 2.1.

Corollary 3.3. Let $\big(X_{1:n}, X_{2:n}, \dots,X_{n:n} \big)\sim$ OS( $ F;\;\phi $ ) and $\big(Z_{1:n}, Z_{2:n}, \dots,Z_{n:n} \big)\sim$ OS( $ G;\;\phi $ ). Furthermore, let ${F}\sim$ PHR( $Q;\;\alpha$ ) and ${G}\sim$ PHR( $ Q;\;\beta$ ). Then the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $ \alpha \geq \beta$ , then $X_{k:n} \; \leq_{hr} \; Z_{k:n}$ for all $k=1, 2, \dots, n$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If $ \alpha \geq \beta$ , then $X_{k:n} \; \leq_{rh} \; Z_{k:n}$ for all $k=1, 2, \dots, n$ .

  3. (c) Assume that ${G(nu)}/{R(u)} - {G(u)}/{R(u)} $ is positive and increasing in $u>0$ . If $ \alpha \geq \beta$ , then $X_{k:n} \; \leq_{lr} \; Z_{k:n}$ for all $k=1, 2, \dots, n$ .

3.3. Ageing properties of DSOS

In this subsection, we study some ageing properties of DSOS.

In the following theorem, we provide various sets of sufficient conditions to show that the IFR, the DRFR, the IFRA, and the NBU classes are preserved under the formation of $(n-k+1)$ -out-of-n systems with lifetimes described by DSOS. The proofs of the first, third, and fourth parts of this theorem are similar to that of the second part and are therefore omitted.

Theorem 3.6. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). For a given $i\in\{1,2,\dots,n\}$ , the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $F_1 \leq_{c} F_2 \leq_{c} \dots \leq_{c} F_i$ and $F_i$ is IFR, then $X^{\star}_{k:n}$ is IFR for all $k=1,2, \dots, i$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If $F_1 \geq_{c} F_2 \geq_{c} \dots \geq_{c} F_i$ and $F_i$ is DFR, then $X^{\star}_{k:n}$ is DRFR for all $k=1,2, \dots, i$ .

  3. (c) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $F_1 \leq_{{\ast}} F_2 \leq_{*} \dots \leq_{*} F_i$ and $F_i$ is IFRA, then $X^{\star}_{k:n}$ is IFRA for all $k=1,2, \dots, i$ .

  4. (d) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $F_1 \leq_{su} F_2 \leq_{su} \dots \leq_{su} F_i$ and $F_i$ is NBU, then $X^{\star}_{k:n}$ is NBU for all $k=1,2, \dots, i$ . $\Box$

The corollary below follows immediately from Theorem 3.6 and Remark 2.1. Some special cases of this corollary are mentioned in Sahoo and Hazra [Reference Sahoo and Hazra37].

Corollary 3.4. Let $\big(X_{1:n}, X_{2:n}, \dots,X_{n:n} \big)\sim$ OS( $ F;\;\phi $ ). Then the following results hold true:

  1. (a) Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If F is IFR (resp. IFRA, NBU), then $X_{k:n}$ is IFR (resp. IFRA, NBU) for all $k=1,2,\dots,n$ .

  2. (b) Assume that $uH'(u)/H(u)$ is decreasing in $u>0$ . If F is DFR, then $X_{k:n}$ is DRFR for all $k=1,2,\dots,n$ . $\Box$

In the following theorem, we study the MIFRA property of DSOS. The proof of the first part is omitted for the sake of brevity.

Theorem 3.7. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). Assume that $uR'(u)/ R(u)$ is increasing in $u>0$ . For a given $i\in\{1,2,\dots,n\}$ , the following results hold true:

  1. (a) If $F_1 \leq_{*} F_2 \leq_{*} \dots \leq_{*} F_i$ and $F_i$ is IFRA, then $\!\left(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots, X^{\star}_{i:n} \right)$ is MIFRA.

  2. (b) If $F_1$ is IFRA and $F_2, \dots ,F_i$ are IFR, then $\!\left(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots X^{\star}_{i:n}\right)$ is MIFRA and $X^{\star}_{i:n}$ is IFRA. $\Box$

The corollary below follows immediately from Theorem 3.7 and Remark 2.1.

Corollary 3.5. Let $\big(X_{1:n}, X_{2:n}, \dots,X_{n:n} \big)\sim$ OS( $ F;\;\phi $ ). Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If F is IFRA, then $\big( X_{1:n}, X_{2:n}, $ $\dots, X_{i:n} \big) $ is MIFRA for all $i=1,2,\dots,n$ . $\Box$

In the following theorem, we prove the same result as in Theorem 3.6(d) under a different set of sufficient conditions.

Theorem 3.8. Let $\big(X^{\star}_{1:n}, X^{\star}_{2:n}, \dots,X^{\star}_{n:n} \big)\sim$ DSOS( $ F_1 , F_2 , \dots , F_n;\;\phi $ ). Furthermore, let $i\in\{1,2,\dots,n\}$ . Assume that $uR'(u)/R(u)$ is increasing in $u>0$ . If $F_1$ is NBU and $ur_j(u)$ is superadditive for $u>0$ for all $j=1,2, \dots, i$ , then $X^{\star}_{k:n}$ is NBU for all $k=1,2,\dots,i$ ; here $r_j$ is the hazard rate function of $F_j$ . $\Box$

Remark 3.2. It should be noted that the condition ‘ $ur_j(u)$ is superadditive for $u>0$ ’ is satisfied by many well-known distributions (see [Reference Hazra, Kuiti, Finkelstein and Nanda23]).

4. Examples

In this section, we discuss some examples to demonstrate the sufficient conditions given in the theorems of the previous section. Note that these sufficient conditions are satisfied by many popular Archimedean copulas (with specific choices of parameters), including the Clayton copula

\begin{align*} C(u_1,u_2,\dots,u_n) =\Bigg(\sum\limits_{i=1}^{n} u_i^{-\theta} - n + 1\Bigg)^{-1/\theta}\end{align*}

with the generator $\phi(t) = (\theta t + 1)^{-1/\theta}$ , for $\theta \geq 0$ , the Ali–Mikhail–Haq (AMH) copula

\begin{align*} C(u_1,u_2,\dots,u_n) =\Bigg((1 - \theta)\prod\limits_{i=1}^{n} u_i\Bigg)/\Bigg(\prod\limits_{i=1}^{n}(1- \theta +\theta u_i) -\theta \prod\limits_{i=1}^{n} u_i\Bigg)\end{align*}

with the generator $\phi(t) = (1 - \theta)/(e^t-\theta)$ , for $\theta \in [0, 1)$ , and the Gumbel–Hougaard copula

\begin{align*} C(u_1,u_2,\dots,u_n)=\exp\Bigg(-\Bigg[\sum\limits_{i=1}^n(-\!\ln u_i)^\theta\Bigg]^{1/\theta}\Bigg)\end{align*}

with the generator $\phi(t)=\exp\!\left(-t^{1/\theta}\right)$ , for $\theta \in[1,\infty)$ , among others. For the sake of completeness, below we give six examples. More examples can be found in [Reference Sahoo and Hazra37].

The first four examples illustrate the condition given in Parts (a) and (d) of Theorem 3.1, Theorem 3.2(a), Theorem 3.3(a), Theorem 3.5(a), Parts (a), (c), and (d) of Theorem 3.6, Theorem 3.7, and Theorem 3.8. The first two examples are borrowed from [Reference Sahoo and Hazra37].

Example 4.1. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = e^{-u^{\frac{1}{\delta_1}}},\quad\delta_1 \in\!\left[1,\infty\right), \; u>0,\end{eqnarray*}

which gives

\begin{eqnarray*} \frac{uR'(u)}{R(u)} = \frac{1}{\delta_1}, \quad u>0.\end{eqnarray*}

It is trivially true that ${uR'(u)}/{R(u)}$ is positive and increasing in $u>0$ . Thus, the required condition is satisfied.
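For completeness, this reduction can also be verified symbolically; the following sketch (not part of the original paper) performs the check with a computer algebra system.

```python
import sympy as sp

# A symbolic check (not from the paper) of Example 4.1: for the generator
# phi(u) = exp(-u**(1/delta_1)), the ratio u*R'(u)/R(u) reduces to 1/delta_1,
# which is positive and (weakly) increasing in u.

u, d = sp.symbols('u delta_1', positive=True)
phi = sp.exp(-u ** (1 / d))
R = u * sp.diff(phi, u) / phi          # R(u) = u*phi'(u)/phi(u)
expr = sp.simplify(u * sp.diff(R, u) / R)
print(expr)                            # expected output: 1/delta_1
```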

Example 4.2. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = e^{1-(1+u)^{\frac{1}{\delta_2}}},\quad \delta_{2} \in\!\left(0,\infty\right), \; u>0.\end{eqnarray*}

From this, we have

\begin{eqnarray*} R(u) = -\frac{1}{\delta_2}u(1+u)^{\frac{1}{\delta_2}-1},\quad u>0,\end{eqnarray*}

and

\begin{eqnarray*} \frac{uR'(u)}{R(u)} = 1+\bigg(\frac{1}{\delta_2}-1\bigg)\frac{u}{u+1}, \quad u>0.\end{eqnarray*}

Let us fix $0 < \delta_{2} \leq 1$ . It can easily be shown that ${uR'(u)}/{R(u)}$ is positive and increasing in $u>0$ . Thus, the required condition is satisfied.

Example 4.3. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = 1-(1-e^{-u})^{\frac{1}{\delta_{3}}},\quad \delta_{3} \in\!\left[1,\infty\right), \; u>0,\end{eqnarray*}

which gives

\begin{eqnarray*} \frac{uR'(u)}{R(u)} = 1-u+\frac{(1-\delta_{3})ue^{-u}}{\delta_{3}(1-e^{-u})}+\frac{ue^{-u}(1-e^{-u})^{\frac{1}{\delta_{3}}-1}}{\delta_{3}\Big(1-(1-e^{-u})^{\frac{1}{\delta_{3}}}\Big)}, \; u>0.\end{eqnarray*}

Writing $n_{1}(u,\delta_3)=uR'(u)/R(u)$ and $n_{2}(u,\delta_3)={\partial }/{\partial u}(uR'(u)/R(u))$ , $u>0$ , $\delta_3 \in [1, 10]$ , we plot $n_{1}(\!-\!\ln\!(v), \delta_3)$ and $n_{2}(\!-\!\ln\!(v), \delta_3)$ against $ (v, \delta_3) \in(0,1] \times [1, 10]$ . From Figures 1a and 1b, we see that $n_{1}(\!-\!\ln\!(v), \delta_3)$ and $n_{2}(\!-\!\ln\!(v), \delta_3)$ are positive in $ (v, \delta_3) \in(0,1] \times [1, 10]$ , and hence $uR'(u)/R(u)$ is positive and increasing in $u>0$ for $1 \leq \delta_3 \leq 10 $ . Thus, the required condition is satisfied.

Figure 1. Plots of $n_1$ , $n_2$ , $n_3$ , $n_4$ , $n_5$ , $n_6$ , $n_7$ , and $n_8$ .
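The grid checks behind plots such as Figure 1 can be reproduced with a few lines of code; the sketch below (not the authors' code) evaluates $n_1(u,\delta_3)$ from the closed-form expression in Example 4.3 and tests positivity and monotonicity on an assumed grid with $\delta_3=2$.

```python
import numpy as np

# A numerical sketch (not the paper's code) of the grid check behind Figure 1:
# evaluate n1(u, delta_3) = u*R'(u)/R(u) for the generator of Example 4.3 using
# the closed-form expression displayed above, and test positivity and
# monotonicity in u.  The grid and delta_3 = 2 are assumed choices.

def n1(u, d3):
    e = np.exp(-u)
    return (1.0 - u
            + (1.0 - d3) * u * e / (d3 * (1.0 - e))
            + u * e * (1.0 - e) ** (1.0 / d3 - 1.0)
              / (d3 * (1.0 - (1.0 - e) ** (1.0 / d3))))

delta3 = 2.0
u = np.linspace(0.05, 10.0, 400)
vals = n1(u, delta3)
print("positive:  ", bool(np.all(vals > 0)))
print("increasing:", bool(np.all(np.diff(vals) >= 0)))   # both expected True for delta_3 in [1, 10]
```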

Example 4.4. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = -\frac{1}{\delta_4}\ln\!\big(1+ e^{-u}\big(e^{-\delta_4}-1\big) \big),\quad \delta_{4} \in\big(\!-\!\infty,\infty\big)\setminus \{0\}, \; u>0,\end{eqnarray*}

which gives

\begin{eqnarray*} \frac{uR'(u)}{R(u)} = \frac{ u - e^{\delta_4} u + (1 - e^{\delta_4} - e^{u + \delta_4} (\!-\!1 + u)) \ln\!\left(1 + e^{-u} ( e^{-\delta_4} -\!1)\right)}{\!\left(1 - e^{\delta_4} + e^{u + \delta_4}\right) \ln\!\left(1 + e^{-u} ( e^{-\delta_4} -\!1 )\right) }, \; u>0.\end{eqnarray*}

Writing $n_{3}(u,\delta_4)=uR'(u)/R(u)$ and $n_{4}(u,\delta_4)={\partial }/{\partial u}(uR'(u)/R(u))$ , $u>0$ , $\delta_4 \in [\!-\!40, -30]$ , we plot $n_{3}(\!-\!\ln\!(v), \delta_4)$ and $n_{4}(\!-\!\ln\!(v), \delta_4)$ against $ (v, \delta_4) \in(0,1] \times [-40, -30]$ . From Figures 1c and 1d, we see that $n_{3}(\!-\!\ln\!(v), \delta_4)$ and $n_{4}(\!-\!\ln\!(v), \delta_4)$ are positive in $ (v, \delta_4) \in(0,1] \times [\!-\!40, -30]$ , and hence $uR'(u)/R(u)$ is positive and increasing in $u>0$ for $-40 \leq \delta_4 \leq -30 $ . Thus, the required condition is satisfied.

The next two examples demonstrate the condition given in Theorem 3.1(b), Theorem 3.2(b), Theorem 3.3(b), Theorem 3.5(b), and Theorem 3.6(b).

Example 4.5. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = e^{-u^{\frac{1}{\delta_5}}},\quad\delta_5 \in\!\left[1,\infty\right), \; u>0,\end{eqnarray*}

which gives

\begin{eqnarray*} \frac{uH'(u)}{H(u)} = \frac{1}{\delta_5}\Big(1-u^{\frac{1}{\delta_5}}\Big)-\frac{u^{\frac{1}{\delta_5}}e^{-u^{\frac{1}{\delta_5}}}}{\delta_5\Big(1-e^{-u^{\frac{1}{\delta_5}}}\Big)}, \quad u>0. \end{eqnarray*}

Writing $n_{5}(u,\delta_5)=uH'(u)/H(u)$ and $n_{6}(u,\delta_5)={\partial }/{\partial u}(uH'(u)/H(u))$ , $u>0$ , $\delta_5 \in [13, 19]$ , we plot $n_{5}(\!-\!\ln\!(v), \delta_5)$ and $n_{6}(\!-\!\ln\!(v), \delta_5)$ against $ (v, \delta_5) \in(0,1] \times [13, 19]$ . From Figures 1e and 1f, we see that $n_{5}(\!-\!\ln\!(v), \delta_5)$ and $n_{6}(\!-\!\ln\!(v), \delta_5)$ are negative in $ (v, \delta_5) \in(0,1] \times [13, 19]$ , and hence $uH'(u)/H(u)$ is negative and decreasing in $u>0$ for $13 \leq \delta_5 \leq 19 $ . Thus, the required condition is satisfied.

Example 4.6. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = \!\left(\delta_6 u +1\right)^{-\frac{1}{\delta_6}}, \quad\delta_6 \in\!\left[-1,\infty\right)\setminus \{0\}, \; u>0,\end{eqnarray*}

which gives

\begin{eqnarray*} \frac{uH'(u)}{H(u)} = \frac{-1 + \!\left(\delta_6 u +1\right)^{\frac{1}{\delta_6}}- u\!\left(\delta_6 u +1\right)^{\frac{1}{\delta_6}} }{\big(\delta_6 u +1\big) \bigg( \big(\delta_6 u +1\big)^{\frac{1}{\delta_6}} -1\bigg)}, \quad u>0. \end{eqnarray*}

Writing $n_{7}(u,\delta_6)=uH'(u)/H(u)$ and $n_{8}(u,\delta_6)={\partial }/{\partial u}(uH'(u)/H(u))$ , $u>0$ , $\delta_6 \in [0.2, 0.9]$ , we plot $n_{7}(\!-\!\ln\!(v), \delta_6)$ and $n_{8}(\!-\!\ln\!(v), \delta_6)$ against $ (v, \delta_6) \in(0,1] \times [0.2, 0.9]$ . From Figures 1g and 1h, we see that $n_{7}(\!-\!\ln\!(v), \delta_6)$ and $n_{8}(\!-\!\ln\!(v), \delta_6)$ are negative in $ (v, \delta_6) \in(0,1] \times [0.2, 0.9]$ , and hence $uH'(u)/H(u)$ is negative and decreasing in $u>0$ for $0.2 \leq \delta_6 \leq 0.9 $ . Thus, the required condition is satisfied.

Below we cite two examples that illustrate the condition given in Theorem 3.1(c), Theorem 3.2(c), Theorem 3.3(c), and Theorem 3.5(c).

Example 4.7. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = e^{\frac{1}{\delta_7}\!\left(1-e^u\right)}, \quad \delta_7 \in\!\left(0,1\right],\;u>0. \end{eqnarray*}

Then

\begin{eqnarray*} \frac{uG'(u)}{G(u)} = \frac{\delta_7-e^u-ue^u}{\delta_7-e^u},\quad \frac{G(u)}{R(u)}=1-\frac{\delta_7}{e^u}, \quad u>0, \end{eqnarray*}

and

\begin{eqnarray*} n_9(u)\stackrel{\text{def.}}=\frac{\partial }{\partial u}\bigg(\frac{uG'(u)}{G(u)}\bigg) = \frac{e^u\left(e^u-\delta_7(1+u)\right)}{\!\left(\delta_7-e^u\right)^2}, \quad u>0. \end{eqnarray*}

It can easily be shown that ${uG'(u)}/{G(u)}$ is positive, $n_9(u)$ is positive, and ${G(u)}/{R(u)}$ is increasing in $u>0$ . Thus, ${uG'(u)}/{G(u)}$ is positive and increasing and ${G(u)}/{R(u)}$ is increasing in $u>0$ . Consequently, the required condition holds from Remark 3.1.

Example 4.8. Consider the Archimedean copula with generator

\begin{eqnarray*} \phi(u) = e^{1-\!\left(1+u\right)^{\frac{1}{\delta_{8}}}},\quad \delta_{8} \in\!\left(0,\infty\right), \; u>0. \end{eqnarray*}

From this, we have

\begin{eqnarray*} &&G(u)=-\frac{1}{\delta_{8}}u\!\left(1+u\right)^{\frac{1}{\delta_{8}}-1}+u\!\left(1+u\right)^{-1}\bigg(\frac{1}{\delta_{8}}-1\bigg),\quad u>0, \\[5pt] &&R(u)=-\frac{1}{\delta_{8}}u\!\left(1+u\right)^{\frac{1}{\delta_{8}}-1},\quad u>0, \\[5pt] && \frac{G(u)}{R(u)}=1-\frac{1-\delta_{8}}{\!\left(1+u\right)^{\frac{1}{\delta_{8}}}}, \quad u>0. \end{eqnarray*}

Writing $n_{10}(u,\delta_8)=uG'(u)/G(u)$ and $n_{11}(u,\delta_8)={\partial }/{\partial u}(uG'(u)/G(u))$ , $u>0$ , $\delta_8 \in [0.5, 0.9]$ , we plot $n_{10}(\!-\!\ln\!(v), \delta_8)$ and $n_{11}(\!-\!\ln\!(v), \delta_8)$ against $ (v, \delta_8) \in(0,1] \times [0.5, 0.9]$ . From Figures 2a and 2b, we see that $n_{10}(\!-\!\ln\!(v), \delta_8)$ and $n_{11}(\!-\!\ln\!(v), \delta_8)$ are positive in $ (v, \delta_8) \in(0,1] \times [0.5, 0.9]$ , and hence $uG'(u)/G(u)$ is positive and increasing in $u>0$ for $0.5 \leq \delta_8 \leq 0.9 $ . Furthermore, it can easily be shown that $G(u)/R(u)$ is increasing in $u>0$ for $0.5 \leq \delta_8 \leq 0.9 $ . Consequently, the required condition holds from Remark 3.1.

Figure 2. Plots of $n_{10}$ and $n_{11}$ .

5. Concluding remarks

Most systems used in real life are very complex in nature; consequently, the components of these systems are interdependent, and the failure of one component affects the performance of the remaining working components. One effective way to model these systems is by using developed sequential order statistics (DSOS). In this paper, we study ordering and ageing properties of DSOS, and discuss some numerical examples. Our study generalizes many well-known results that are available for generalized order statistics and ordinary order statistics.

The main idea of this paper is to study systems with dependent components where dependency structures are modeled by Archimedean copulas. Among all existing copulas, the family of Archimedean copulas is the most popular one because of its mathematical tractability and ability to capture a wide spectrum of dependency structures. Thus, the proposed ordering results for DSOS, governed by the Archimedean copula, may be useful for comparing failure times of different components of a given system. Moreover, we can use these results to compare the lifetimes of two or more systems with dependent components in a given scenario. The proposed results on stochastic ageings may be helpful for understanding how a system ages as time progresses.

In this paper, some of the results are developed for a specific model. For example, all of the results given in Theorem 3.5 are derived for the proportional hazard rate model. The study of the same problem in a general setup (i.e., with arbitrary cumulative distribution functions) would be an interesting topic for future work.

Appendix A

Proof of Lemma 2.3: We have

\begin{eqnarray*}&& P\!\left( \zeta (X) + \zeta (Y) > t \right) = \bar{F}_{\zeta \!\left( X \right)} \!\left( t \right) + \int_{0}^{t} \bar{F}_{Y} \Big( \zeta^{-1} (t-x)\Big) \; dF_{\zeta (X) }(x), \quad t>0, \\[5pt] &&P\!\left( \zeta \!\left(X + Y \right) > t \right) = \bar{F}_{\zeta \!\left( X \right)} \!\left( t \right) + \int_{0}^{t} \bar{F}_{Y} \Big( \zeta^{-1} (t) -\zeta^{-1} (x) \Big) \; dF_{\zeta (X) }(x), \quad t>0. \end{eqnarray*}

Since $\zeta$ is a strictly increasing, continuous, and superadditive function, we have that $\zeta^{-1}$ is a subadditive function (see Proposition 1 of Østerdal [Reference Østerdal34]). Consequently, we have $\zeta^{-1} \!\left(x \right)+\zeta^{-1} \!\left(t-x \right) \geq \zeta^{-1} (t) $ for all $t\geq x > 0$ . This implies that

\begin{align*} \bar{F}_{Y} \Big(\zeta^{-1}(t-x)\Big) \leq \bar{F}_{Y} \Big( \zeta^{-1} (t) -\zeta^{-1}(x) \Big) \quad \text{ for all }t\geq x > 0,\end{align*}

which further implies

\begin{eqnarray*} \int_{0}^{t} \bar{F}_{Y} \Big(\zeta^{-1}(t-x)\Big) \; dF_{\zeta (X) }(x) \nonumber \leq \int_{0}^{t} \bar{F}_{Y} \Big( \zeta^{-1} (t) -\zeta^{-1}(x) \Big) \; dF_{\zeta (X) }(x)\quad \text{ for all } t>0, \end{eqnarray*}

and hence the result is proved. $\Box$

Proof of Theorem 3.1(a): From Remark 2.2, we have

(6) \begin{eqnarray} \bar{F}_{X^{\star}_{i+1:n} }(t) = \bar{F}_{X^{\star}_{i:n}}(t) + \int_{0}^{t} \phi\bigg((n-i)\psi\bigg(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\bigg)\bigg) {f}_{X^{\star}_{i:n}}(z) dz, \quad t>0, \end{eqnarray}

which gives

(7) \begin{eqnarray} \frac{{r}_{X^{\star}_{i:n}}(t)}{{r}_{X^{\star}_{i+1:n}}(t)} = \frac{{f}_{X^{\star}_{i:n}}(t) + {r}_{X^{\star}_{i:n}}(t)\displaystyle\int_{0}^{t} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right) {f}_{X^{\star}_{i:n}}(z) \,dz}{\displaystyle \int_{0}^{t} \frac{R\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)}{R\left(\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)r_{i+1}(t){f}_{X^{\star}_{i:n}}(z) \,dz} \end{eqnarray}

for all $t>0$ , where $r_{i+1}$ is the hazard rate function of $F_{i+1}$ . Now, from the given condition $X^{\star}_{i:n} \;\leq_{hr } \; Y^{(i+1)}_{1: n-i}$ , we get

(8) \begin{eqnarray} r_{i+1}(t)\frac{R\big((n-i)\psi\big(\bar{F}_{i+1}(t)\big)\big)}{R\big(\psi\big(\bar{F}_{i+1}(t)\big)\big)} \leq {r}_{X^{\star}_{i:n}}(t) \quad \text{ for all } t>0.\end{eqnarray}

Furthermore, note that $\bar{F}_{i+1}(t) \leq {\bar{F}_{i+1}(t)}/{\bar{F}_{i+1}(z)}$ for all $t \geq z >0$ . This implies that

(9) \begin{eqnarray} \psi\big(\bar{F}_{i+1}(t)\big) \geq \psi\bigg( \frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)} \bigg)\quad \text{ for all } t\geq z>0.\end{eqnarray}

Again, from the condition that ${u R' (u)}/{R(u)}$ is increasing in $u>0$ , we get

(10) \begin{eqnarray} \frac{R\left((n-i)u\right)}{R(u)} \text{ is increasing in } u>0.\end{eqnarray}

Thus, from (9) and (10), we get

\begin{eqnarray*} \frac{R\Big((n-i)\psi\Big(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\Big)\Big)}{R\left(\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)} \leq \frac{R\left((n-i)\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}{R\left(\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}\quad \text{ for all } t>0.\end{eqnarray*}

On using the above inequality and (8), we get

(11) \begin{eqnarray} \frac{{r}_{X^{\star}_{i:n}}(t)\int_{0}^{t} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right) {f}_{X^{\star}_{i:n}}(z) \,dz}{\displaystyle \int_{0}^{t} \frac{R\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)}{R\left(\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)r_{i+1}(t){f}_{X^{\star}_{i:n}}(z) \,dz} \geq 1\quad \text{ for all } t>0.\end{eqnarray}

Furthermore, we have

(12) \begin{eqnarray} \frac{{f}_{X^{\star}_{i:n}}(t)}{\displaystyle \int_{0}^{t} \frac{R\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)}{R\left(\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)r_{i+1}(t){f}_{X^{\star}_{i:n}}(z) \,dz} \geq 0 \quad \text{ for all }t>0.\end{eqnarray}

On using (11) and (12) in (7), we get $ {r}_{X^{\star}_{i:n}}(t)\; \geq \; {r}_{X^{\star}_{i+1:n}}(t)$ for all $t>0$ . Hence the result is proved. $\Box$

Proof of Theorem 3.1(c): From (6), we get

for all $t>0$ , where $r_{i+1}$ is the hazard rate function of $F_{i+1}$ . Now, from the condition $X^{\star}_{i:n} \; \leq_{lr} \; Y^{(i+1)}_{1: n-i}$ , we get

\begin{eqnarray*}\frac{f_{Y^{(i+1)}_{1:n-i}}(t)}{{f}_{X^{\star}_{i:n}}(t)} \text{ is increasing in } t>0.\end{eqnarray*}

Thus, to prove the result, it suffices to show that

\begin{eqnarray*}&& \frac{\displaystyle \int_{0}^{t} \frac{R\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)}{R\left(\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)} \phi\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)r_{i+1}(t){f}_{X^{\star}_{i:n}}(z) \,dz}{f_{Y^{(i+1)}_{1:n-i}}(t)} \nonumber \end{eqnarray*}

is increasing in $t>0$ , or equivalently,

(13)

Note that $\bar{F}_{i+1}(t) \leq {\bar{F}_{i+1}(t)}/{\bar{F}_{i+1}(z)}$ for all $t \geq z >0$ . Then, from the condition that ${G (nu)}/{R(u)} - {G (u)}/{R(u)}$ is positive and increasing in $u>0$ , we get

\begin{align*} &\frac{ \frac{\partial}{\partial t}\!\left(\phi'\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)(n-i)\psi'\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)}{ \phi'\!\left((n-i)\psi\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)\right)(n-i)\psi'\!\left(\frac{\bar{F}_{i+1}(t)}{\bar{F}_{i+1}(z)}\right)} \\[5pt] & \qquad \geq r_{i+1}(t)\Bigg(\frac{G\!\left(\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}{R\left(\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}-\frac{G\!\left((n-i)\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}{R\left(\psi\!\left(\bar{F}_{i+1}(t)\right)\right)}\Bigg)\end{align*}

for all $t>0$ , which implies that

(14)

Furthermore, we have

(15)

On combining (14) and (15), we get (13), and hence the result is proved. $\Box$

Proof of Theorem 3.1(d): The mean residual lifetime function of $\!\left(X^{\star}_{i+1:n}|X^{\star}_{i:n}= z\right)$ is given by

\begin{eqnarray*} m_{X^{\star}_{i+1:n}}(t|z) &=& \begin{cases} z-t+m_{X^{(i+1)}_{1:n-i}}\!\left(z | z\right) & \text{ if } 0 < t \leq z < \infty\\[5pt] m_{X^{(i+1)}_{1:n-i}}(t|z) & \text{ if } 0< z \leq t < \infty, \end{cases}\end{eqnarray*}

where $m_{X^{(i+1)}_{1:n-i}}\!\left(\cdot|z\right)$ is the mean residual lifetime function of $X^{(i+1)}_{1:n-i}$ , given $X^{\star}_{i:n}= z$ . Note that $\bar{F}_{i+1}(t) \leq {\bar{F}_{i+1}(t)}/{\bar{F}_{i+1}(z)}$ for all $t,z >0$ . Then, from the condition that ${u R' (u)}/{R(u)}$ is increasing in $u>0$ , we get

\begin{eqnarray*} \frac{\phi\Big((n-i)\psi\Big(\frac{\bar{F}_{i+1}(u)}{\bar{F}_{i+1}(z)}\Big)\Big)}{\phi\!\left((n-i)\psi\!\left(\bar{F}_{i+1}(u)\right)\right)} \text{ is increasing in } u >0.\end{eqnarray*}

This implies that $Y^{(i+1)}_{1:n-i} \; \leq_{hr} \; X^{(i+1)}_{1:n-i}$, which further implies that $Y^{(i+1)}_{1:n-i} \; \leq_{mrl} \; X^{(i+1)}_{1:n-i}$. Consequently, we have

\begin{align*} m_{Y^{(i+1)}_{1:n-i}}(t)\leq m_{X^{(i+1)}_{1:n-i}}(t|z)\end{align*}

for all $t>0$ . On using this and the condition $X^{\star}_{i:n} \; \leq_{mrl } \; Y^{(i+1)}_{1: n-i}$ , we get

\begin{align*} m_{X^{\star}_{i:n}}(t) \leq m_{Y^{(i+1)}_{1:n-i}}(t) \leq m_{X^{\star}_{i+1:n}}(t|z), \end{align*}

for $0< z \leq t < \infty$ . Furthermore, for $0 < t \leq z < \infty$ we have

\begin{eqnarray*} m_{X^{\star}_{i:n}}(t) &\leq& z-t + m_{X^{\star}_{i:n}}(z) \nonumber \\[5pt] &\leq& z-t + m_{Y^{(i+1)}_{1:n-i}}(z) \nonumber \\[5pt] &\leq& m_{X^{\star}_{i+1:n}}(t|z),\end{eqnarray*}

where the first inequality follows from the fact that $t+m(t)$ is increasing in $t>0$ , for any mean residual lifetime function $m(\!\cdot\!)$ ; the second inequality follows from the condition that $X^{\star}_{i:n} \; \leq_{mrl } \; Y^{(i+1)}_{1: n-i}$ ; and the third inequality follows from the fact that

\begin{align*} m_{Y^{(i+1)}_{1:n-i}}(t)\leq m_{X^{(i+1)}_{1:n-i}}(t|z).\end{align*}

Finally, by combining the two cases, we get

\begin{align*} m_{X^{\star}_{i:n}}(t) \leq m_{X^{\star}_{i+1:n}}(t|z) \quad \text{ for all } t>0,\end{align*}

which further implies $m_{X^{\star}_{i:n}}(t)\leq m_{X^{\star}_{i+1:n}}(t)$ for all $t>0$ . Hence, the result is proved. $\Box$
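The first inequality in the chain above rests on the elementary fact, used in the proof, that $t+m(t)$ is increasing in $t$ for any mean residual lifetime function $m(\cdot)$. The following sketch illustrates this numerically for a Weibull lifetime with shape parameter $2$ (an arbitrary illustrative choice).

\begin{verbatim}
import numpy as np

s = np.linspace(0.0, 12.0, 120001)
ds = s[1] - s[0]
sbar = np.exp(-s ** 2)                    # Weibull(shape 2) survival function

t_idx = np.arange(0, 60001, 1000)         # evaluation points t in [0, 6]
# m(t) = \int_t^\infty \bar F(s) ds / \bar F(t), computed by a simple Riemann sum
tm = [s[j] + sbar[j:].sum() * ds / sbar[j] for j in t_idx]

print(bool(np.all(np.diff(tm) >= 0.0)))   # t + m(t) is increasing in t
\end{verbatim}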

Proof of Theorem 3.2(b): Note that the reverse hazard rate order is closed under increasing transformations. Thus, $X^{\star}_{i:n} \leq_{rh} X^{\star}_{i+1:n}$ holds if and only if $D_{i+1}\!\left(X^{\star}_{i:n}\right) \leq_{rh} D_{i+1}\!\left(X^{\star}_{i+1:n}\right)$ ; here $D_{i+1}$ is the cumulative hazard rate function of $F_{i+1}$ . Furthermore, from Lemma 2.2, we have $X^{\star}_{i+1:n} = D_{i+1}^{-1}\!\left(W^{( i+1) } + D_{i+1}\!\left(X_{i:n}^{\star}\right)\right)$ . Thus, the above inequality holds if and only if $D_{i+1}\!\left(X^{\star}_{i:n}\right) \leq_{rh} W^{( i+1) } + D_{i+1}\!\left(X_{i:n}^{\star}\right)$ . Now, we have that $Y_l^{ (i+1) }$ , $l=1,2, \dots, n-i$ , and $X^{\star}_{i:n}$ are independent, which implies that $W^{(i+1)}$ and $D_{i+1}\!\left(X^{\star}_{i:n}\right)$ are independent. Moreover, $W^{( i+1) } $ is a non-negative random variable. Thus, in view of Theorem 1.B.44 of Shaked and Shanthikumar [Reference Shaked and Shanthikumar36], the result follows (i.e., the above inequality holds) provided that $D_{i+1}\!\left(X_{i:n}^{\star}\right)$ is DRFR. We now proceed to prove the statement ‘ $D_{i+1}\!\left(X_{i:n}^{\star}\right)$ is DRFR’ using induction. Note that

\begin{align*} \tilde{\Delta}_{D_{2}(X^{\star}_{1:n})} (t) = - \!\ln\!\Bigg( 1- \phi\Bigg(n\psi\Bigg(e^{- \!\left(D_1 \circ D_2^{-1}\right)\!\left( t \right) }\Bigg)\Bigg) \Bigg), \quad t>0,\end{align*}

which gives

(16) \begin{align} \frac{\partial^2}{\partial t^2}\Big(\tilde{ \Delta }_{D_{2}\!\left(X^{\star}_{1:n}\right)}(t)\Big) &= -\bigg( \frac{\partial u}{\partial t} \bigg)^2 \frac{\partial }{\partial u} \bigg(\frac{H\!\left(n\psi(e^{-u})\right)}{H\!\left(\psi(e^{-u})\right)} \times \frac{e^{-u}} {1- e^{-u}}\bigg) \nonumber \\[5pt] & \quad - \bigg(\frac{H\!\left(n\psi(e^{-u})\right)}{H\!\left(\psi(e^{-u})\right)} \times \frac{e^{-u}} {1- e^{-u}}\bigg) \frac{\partial^2 u}{\partial t^2}, \quad t>0,\end{align}

where $u= \left(D_1 \circ D_2^{-1}\right) (t)$ . Now, from the condition that ${u H' (u)}/{H(u)}$ is decreasing in $u>0$ , we get that ${ H (nu)}/{H(u)}$ is positive and decreasing in $u>0$ . This further implies that

(17) \begin{eqnarray} \frac{H\!\left(n\psi(e^{-u})\right)}{H\!\left(\psi(e^{-u})\right)} \text{ is positive and decreasing in } u>0. \end{eqnarray}

In addition, we have

(18) \begin{eqnarray} \frac{e^{-u}} {1- e^{-u}} \text{ is positive and decreasing in } u>0. \end{eqnarray}

Thus, from (17) and (18), we get

(19) \begin{eqnarray} \frac{H\!\left(n\psi(e^{-u})\right)}{H\!\left(\psi(e^{-u})\right)}\times \frac{e^{-u}} {1- e^{-u}} \text{ is positive and decreasing in } u>0. \end{eqnarray}

Again, from the condition $F_1 \geq_c F_2$ and the fact that $D_2^{-1}\!\left(\cdot\right)$ is increasing, we get

(20) \begin{eqnarray}\frac{\partial^2 u}{\partial t^2} = \frac{\partial}{\partial t} \!\left( \frac{r_1\Big(D_{2}^{-1}(t)\Big)}{r_{2}\Big(D_{2}^{-1}(t)\Big)} \right) \leq 0\quad \text{ for all } t>0,\end{eqnarray}

where $r_i$ is the hazard rate function of $F_i$, $i=1,2$. Using (19) and (20) in (16), we get that $D_{2}\!\left(X^{\star}_{1:n}\right)$ is DRFR. Thus, the statement is true for $i=1$. Next we assume that the statement is true for $i=j-1$, i.e., $D_{j}\!\left(X^{\star}_{j-1:n}\right)$ is DRFR. Now, from Lemma 2.2, we get $D_{j+1}\!\left(X^{\star}_{j:n}\right) = \left(D_{j+1} \circ D_j^{-1}\right)\!\left(Q^{(j)}\right)$, where $Q^{(j)} = W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right)$. Then we have

(21) \begin{eqnarray} \tilde{\Delta}_{D_{j+1}\!\left(X^{\star}_{j:n}\right)} (t) =\tilde{\Delta}_{ Q^{(j)} }\Big(\Big(D_{j} \circ D_{j+1}^{-1}\Big) (t) \Big). \end{eqnarray}

Again, by using (5) and the condition that ${u H' (u)}/{H(u)}$ is decreasing in $u>0$, we get

\begin{eqnarray*} \frac{\partial^2} { \partial t^2} \tilde{\Delta}_{W^{(j)}} (t) =- \frac{\partial} { \partial t} \!\left(\frac{H\!\left( \!\left( n-j+1 \right) \psi\!\left(e^{-t}\right)\right)}{H\!\left(\psi\!\left(e^{-t}\right)\right)} \times \frac{ e^{-t} } { 1-e^{-t } }\right) \geq 0\quad \text{ for all } t>0, \end{eqnarray*}

which implies that $W^{(j)}$ is DRFR. On combining this with the induction hypothesis (and the independence of $W^{(j)}$ and $D_{j}\!\left(X^{\star}_{j-1:n}\right)$), we get that $Q^{(j)}$ is DRFR, and hence $\tilde{ \Delta}_{Q^{(j)}} (t)$ is decreasing and convex in $t>0$. Again, proceeding in a similar manner as in the $i=1$ case, one can easily obtain that $D_j \circ D_{j+1}^{-1} (t)$ is concave in $t>0$. Applying these facts in (21), we get that $\tilde{ \Delta}_{D_{j+1}\!\left(X^{\star}_{j:n}\right)}$ is convex in $t>0$. Consequently, $D_{j+1}\!\left(X^{\star}_{j:n}\right)$ is DRFR, and hence the statement is proved for $i=j$. Thus, by induction, we get that $D_{i+1}\!\left(X^{\star}_{i:n}\right)$ is DRFR for all $i$. Hence the result is proved. $\Box$
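The base case of the induction can also be examined numerically from the explicit expression for $\tilde{\Delta}_{D_{2}(X^{\star}_{1:n})}$ displayed above. The sketch below does this for the Gumbel--Hougaard generator and exponential baseline distributions $F_1$, $F_2$ (illustrative assumptions), for which the expression reduces to $-\ln\big(1-e^{-n^{1/\theta}(\lambda_1/\lambda_2)t}\big)$; convexity of this function is exactly the DRFR property of $D_{2}(X^{\star}_{1:n})$.

\begin{verbatim}
import numpy as np

theta, n = 2.0, 4
phi = lambda u: np.exp(-u ** (1.0 / theta))   # Gumbel--Hougaard generator
psi = lambda x: (-np.log(x)) ** theta         # psi = phi^{-1}

lam1, lam2 = 2.0, 1.5                         # exponential baselines: D_k(t) = lam_k * t
u_of_t = lambda t: (lam1 / lam2) * t          # (D_1 o D_2^{-1})(t)

t = np.linspace(0.01, 6.0, 2000)
Delta = -np.log(1.0 - phi(n * psi(np.exp(-u_of_t(t)))))   # expression from the proof

print(bool(np.all(np.diff(Delta, 2) >= -1e-10)))   # second differences: convex => DRFR
\end{verbatim}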

Proof of Theorem 3.2(c): Note that the likelihood ratio order is closed under increasing transformations. Thus, $X^{\star}_{i:n} \leq_{lr} X^{\star}_{i+1:n}$ holds if and only if $D_{i+1}\!\left(X^{\star}_{i:n}\right) \leq_{lr} D_{i+1}\!\left(X^{\star}_{i+1:n}\right)$ ; here $D_{i+1}$ is the cumulative hazard rate function of $F_{i+1}$ . Furthermore, from Lemma 2.2, we have $X^{\star}_{i+1:n} = D_{i+1}^{-1}\!\left(W^{( i+1) } + D_{i+1}\!\left(X_{i:n}^{\star}\right)\right)$ . Thus, the above inequality holds if and only if $D_{i+1}\!\left(X^{\star}_{i:n}\right) \leq_{lr} W^{( i+1) } + D_{i+1}\!\left(X_{i:n}^{\star}\right)$ . Now, we have that $Y_l^{ (i+1) }$ , $l=1,2, \dots, n-i$ , and $X^{\star}_{i:n}$ are independent, which implies that $W^{(i+1)}$ and $D_{i+1}\!\left(X^{\star}_{i:n}\right)$ are independent. Moreover, $W^{( i+1) } $ is a non-negative random variable. Thus, in view of Theorem 1.C.9 of Shaked and Shanthikumar [Reference Shaked and Shanthikumar36], the result follows (i.e., the above inequality holds) provided that $D_{i+1}\!\left(X_{i:n}^{\star}\right)$ is ILR. We now proceed to prove the statement ‘ $D_{i+1}\!\left(X_{i:n}^{\star}\right)$ is ILR’ using induction. We have

\begin{eqnarray*} \frac{f'_{D_{2}\!\left(X^{\star}_{1:n}\right)} (t) } {f_{D_{2}\!\left(X^{\star}_{1:n}\right)} (t) } &=& - \frac{ \alpha_1 } {\alpha_2 }\!\left[ \frac{ n\psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right) \phi '' \!\left(n\psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right)\right)} { \phi ' \!\left(n\psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right)\right)} \frac { e^{- \frac{ \alpha_1 } {\alpha_2 } t } \psi ' \!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right) } {\psi \!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right) } \right. \\[5pt] &&+\!\left. \frac { e^{- \frac{ \alpha_1 } {\alpha_2 } t } \psi '' \!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right) } {\psi ' \!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right) } + 1 \right] \\[5pt] &=& - \frac{ \alpha_1 } {\alpha_2 } \!\left[ \frac{ G \!\left(n\psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right)\right)} { R\left(\psi\!\left(e^{-\frac{ \alpha_1 } {\alpha_2 } t }\right)\right) } - \frac{ G \!\left( \psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right)\right)} { R\left(\psi\!\left(e^{- \frac{ \alpha_1 } {\alpha_2 } t }\right)\right) } + 1 \right] , \quad t>0. \end{eqnarray*}

Note that $\psi\!\left( \cdot\right)$ is a decreasing function. Thus, from the condition that ${G (nu)}/{R(u)} - {G (u)}/{R(u)}$ is positive and increasing in $u>0$ , we get that ${f'_{D_{2}\!\left(X^{\star}_{1:n}\right)} (t) } / {f_{D_{2}\!\left(X^{\star}_{1:n}\right)} (t) }$ is decreasing in $t>0$ , and hence $D_{2}\!\left(X^{\star}_{1:n}\right)$ is ILR. Thus, the statement is true for $i=1$ . Now we assume that the statement is true for $i=j-1$ , i.e. $D_{j}\!\left(X^{\star}_{j-1:n}\right)$ is ILR. Next, by using this, we proceed to show that $D_{j+1}\!\left(X^{\star}_{j:n}\right)$ is ILR. From Lemma 2.2, we get

\begin{eqnarray*} D_{j+1}\!\left(X^{\star}_{j:n}\right) = \frac{ \alpha_{j+1} } {\alpha_{j} } Q^{(j)} , \end{eqnarray*}

where $Q^{(j)} = W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right)$. Furthermore, we have that $Y_l^{ (j) }$, $l=1,2,\dots, n-j+1$, and $X^{\star}_{j-1:n}$ are independent. This implies that $W^{(j)}$ and $D_{j}\!\left(X^{\star}_{j-1:n}\right)$ are independent. Again, by using (5) and the condition that ${G (nu)}/{R(u)} - {G (u)}/{R(u)}$ is positive and increasing in $u>0$, we get

\begin{eqnarray*} \frac{\partial} { \partial t} \!\left( \frac{ f '_{W^{(j)}} (t) } { f_{W^{\!\left(j\right)}} (t) } \right) &=& - \frac{\partial} { \partial t} \!\left[ \frac{ G \!\left(n\psi\!\left(e^{- t }\right)\right)} { R\left(\psi\!\left(e^{- t }\right)\right) } - \frac{ G \!\left( \psi\!\left(e^{- t }\right)\right)} { R\left(\psi\!\left(e^{- t }\right)\right) } + 1 \right] \\[5pt] & \leq & 0\quad \text{ for all } t>0, \end{eqnarray*}

which implies that $W^{(j)}$ is ILR. Furthermore, from the induction hypothesis, we have that $D_{j}\!\left(X^{\star}_{j-1:n}\right)$ is ILR. On combining these two facts, we get that $Q^{(j)}$ is ILR. This implies that $D_{j+1}\!\left(X^{\star}_{j:n}\right)$ is ILR, and hence the statement is proved for $i=j$ . Thus, by induction, we get that $D_{i+1}\!\left(X^{\star}_{i:n}\right)$ is ILR for all i. Hence, the result is proved. $\Box$
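Both this proof and the proof of Theorem 3.1(c) hinge on the condition that ${G(nu)}/{R(u)}-{G(u)}/{R(u)}$ is positive and increasing in $u>0$, which can be checked numerically for a candidate generator. The sketch below does so for the Gumbel--Hougaard generator, taking $G(v)=v\phi''(v)/\phi'(v)$ and $R(u)=u\phi'(u)/\phi(u)$; these explicit forms are assumptions made here for illustration (they are consistent with the displayed computation of $f'_{D_{2}(X^{\star}_{1:n})}/f_{D_{2}(X^{\star}_{1:n})}$ above, but the definitions of $G$ and $R$ from Section 2 should be used if they differ). For this generator the difference reduces to the constant $n^{1/\theta}-1$, so the condition holds in the weak (non-strict) sense.

\begin{verbatim}
import numpy as np

theta, n = 2.0, 4
a = 1.0 / theta
phi   = lambda u: np.exp(-u ** a)                          # Gumbel--Hougaard generator
dphi  = lambda u: -a * u ** (a - 1.0) * phi(u)             # phi'
d2phi = lambda u: a * u ** (a - 2.0) * phi(u) * (a * u ** a + 1.0 - a)   # phi''

# assumed forms (see text): G(v) = v phi''(v)/phi'(v), R(u) = u phi'(u)/phi(u)
G = lambda v: v * d2phi(v) / dphi(v)
R = lambda u: u * dphi(u) / phi(u)

u = np.linspace(0.1, 4.0, 400)
cond = G(n * u) / R(u) - G(u) / R(u)       # reduces to the constant n**a - 1 here
print(bool(np.all(cond > 0)), bool(np.all(np.diff(cond) >= -1e-10)))
\end{verbatim}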

Proof of Theorem 3.4(a): We prove the result using induction. It can easily be shown that the result is true for $k = 1$. Next, we assume that the result is true for $k = j -1$, i.e., $X^{\star}_{j-1:n} \; \leq_{st} \; Z^{\star}_{j-1:n}$. Now, from (4), we have $X^{\star}_{j:n} = D_{j}^{-1}\!\left(W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right)\right)$ and $Z^{\star}_{j:n} = B_{j}^{-1}\!\left(T^{(j)} + B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right)$, where $D_j$ and $B_j$ are the cumulative hazard rate functions of $F_j$ and $G_j$, respectively, and $T^{(j)} \overset{st}{=} W^{(j)}$. Again, the usual stochastic order is closed under increasing transformations. Thus, to prove that $X^{\star}_{j:n} \; \leq_{st} \; Z^{\star}_{j:n}$, it suffices to show that

\begin{align*} W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right) \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} + B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right).\end{align*}

Now, we have that $Y_l^{ (j) }$ , $l=1,2, \dots, n-j+1$ , and $X^{\star}_{j-1:n}$ are independent. This implies that $W^{(j)}$ and $D_{j}\!\left(X^{\star}_{j-1:n}\right)$ are independent. Similarly, $T^{(j)}$ and $B_{j}\!\left(Z^{\star}_{j-1:n}\right)$ are also independent. From the condition $F_j \leq_{st} G_j$ , we get $D_{j}^{-1}\!\left( t \right) \leq B_{j}^{-1}\!\left( t \right) $ for all $t>0$ , which, in view of Theorem 1.A.2 of Shaked and Shanthikumar [Reference Shaked and Shanthikumar36], implies $W^{(j)} \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(W^{(j)} \right)$ , or equivalently, $W^{(j)} \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} \right)$ . Furthermore, from the inductive hypothesis, we have that $X^{\star}_{j-1:n} \; \leq_{st} \; Z^{\star}_{j-1:n}$ , which implies that $D_{j} \!\left(X^{\star}_{j-1:n} \right) \; \leq_{st} \; D_{j} \!\left(Z^{\star}_{j-1:n} \right)$ . Then, by combining these two facts, we get

(22) \begin{eqnarray}W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right) \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} \right) + \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left( B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right).\end{eqnarray}

Furthermore, from the condition $ F_j \leq_{su} G_j$ , we get that $\!\left( D_{j} \circ B_{j}^{-1}\right)\!\left( u\right)$ is strictly increasing and superadditive in $u>0$ . Therefore, from Lemma 2.3, we get

(23) \begin{eqnarray}\!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} \right) + \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left( B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right) \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} + B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right).\end{eqnarray}

By combining (22) and (23), we get

\begin{align*} W^{(j)} + D_{j}\!\left(X^{\star}_{j-1:n}\right) \; \leq_{st} \; \!\left( D_{j} \circ B_{j}^{-1}\right) \!\left(T^{(j)} + B_{j}\!\left(Z^{\star}_{j-1:n}\right)\right),\end{align*}

and hence the result $X^{\star}_{k:n} \; \leq_{st} \; Z^{\star}_{k:n}$ is proved for $k=j$ . Thus, by induction, we conclude that the result is true for all $k=1,2,\dots,i$ . $\Box$
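The passage from (22) to (23) uses only that $\left(D_j \circ B_j^{-1}\right)$ is strictly increasing and superadditive. As a concrete illustration, the sketch below checks superadditivity numerically when $F_j$ is Weibull with shape parameter $2$ and $G_j$ is exponential, so that $D_j(t)=(\lambda t)^2$ and $B_j(t)=\mu t$; these baselines and rates are assumptions made only for this illustration.

\begin{verbatim}
import numpy as np

lam, mu = 1.5, 1.0
D = lambda t: (lam * t) ** 2          # cumulative hazard of the Weibull(shape 2) baseline
B_inv = lambda u: u / mu              # inverse cumulative hazard of the Exp(mu) baseline
g = lambda u: D(B_inv(u))             # (D_j o B_j^{-1})(u) = (lam/mu)**2 * u**2

x = np.linspace(0.01, 5.0, 300)
X, Y = np.meshgrid(x, x)
print(bool(np.all(g(X + Y) >= g(X) + g(Y) - 1e-12)))   # superadditivity on a grid
\end{verbatim}

If the shape parameter of $F_j$ were instead smaller than that of $G_j$, the composition would be of the form $cu^{p}$ with $p<1$, which is subadditive, and this step of the argument would break down.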

Proof of Theorem 3.4(b): We prove the result using induction. It can easily be shown that the result is true for $k = 1$ . Next, we assume that the result is true for $k = j -1$ , i.e., $X^{\star}_{j-1:n} \; \leq_{st} \; Z^{\star}_{j-1:n}$ . From Remark 2.2, we have

\begin{eqnarray*} \bar{F}_{X^{\star}_{j:n} }(t)= \int_{0}^{\infty} k_1\!\left(t, z\right) {f}_{X^{\star}_{j-1:n}}(z) dz\text{ and }\bar{F}_{Z^{\star}_{j:n} }(t)= \int_{0}^{\infty} k_2\!\left(t, z\right) {f}_{Z^{\star}_{j-1:n}}(z) dz, \quad t>0, \end{eqnarray*}

where

\begin{eqnarray*} &&k_1\!\left( t,z\right)= \begin{cases} \phi\!\left((n-j+1) \psi\!\left(\frac{\bar{F}_j\!\left( t \right)}{\bar{F}_j\!\left( z\right)}\right)\right) & \text{if $ t \geq z$,}\\[5pt] 1 & \text{if $ t < z$},\end{cases} \\[5pt] &&k_2\!\left( t,z\right)= \begin{cases} \phi\!\left((n-j+1) \psi\!\left(\frac{\bar{G}_j\!\left( t \right)}{\bar{G}_j\!\left( z\right)}\right)\right) & \text{if $ t \geq z$,}\\[5pt] 1 & \text{if $ t < z$}.\end{cases}\end{eqnarray*}

Now, from the fact that $\phi$ is decreasing, we have that $k_1\!\left( t, z\right)$ is increasing in $z>0$ , for all $t>0$ . By using this and the induction hypothesis (that $X^{\star}_{j-1:n} \; \leq_{st} \; Z^{\star}_{j-1:n}$ ), from Theorem 1.A.3(a) of Shaked and Shanthikumar [Reference Shaked and Shanthikumar36] we get that $k_1(t,X^{\star}_{j-1:n})\leq_{st} k_1(t,Z^{\star}_{j-1:n})$ , which further implies

(24) \begin{eqnarray} \int_{0}^{\infty} k_1\!\left( t, z\right) {f}_{X^{\star}_{j-1:n}}(z) \,dz \; \leq \; \int_{0}^{\infty} k_1\!\left( t, z\right) {f}_{Z^{\star}_{j-1:n}}(z) \,dz\quad \text{ for all } t>0. \end{eqnarray}

Again, from the condition $F_j \leq_{hr} G_j$ , we have ${\bar{F}_{j}(t)}/{\bar{F}_{j}(z)} \leq {\bar{G}_{j}(t)}/{\bar{G}_{j}(z)}$ for all $0\leq z\leq t$ , which further implies $k_1\!\left( t, z\right) \leq k_2\!\left( t, z\right)$ for all $z,t>0.$ Again, this implies

(25) \begin{eqnarray} \int_{0}^{\infty} k_1\!\left( t, z\right) {f}_{Z^{\star}_{j-1:n}}(z) dz \; \leq \; \int_{0}^{\infty} k_2\!\left( t, z\right) {f}_{Z^{\star}_{j-1:n}}(z) dz\quad \text{ for all } t>0. \end{eqnarray}

Finally, by combining (24) and (25), we get $\bar{F}_{X^{\star}_{j:n} }(t)\leq \bar{F}_{Z^{\star}_{j:n} }(t)$ for all $t>0$, and hence $X^{\star}_{k:n} \; \leq_{st} \; Z^{\star}_{k:n}$ is proved for $k=j$. Thus, by induction, we conclude that $X^{\star}_{k:n} \; \leq_{st} \; Z^{\star}_{k:n}$ is true for all $k=1,2,\dots,i$. Hence, the result is proved. $\Box$
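As a sanity check on the argument above, the recursion in Remark 2.2 can be evaluated numerically. The sketch below takes $n=3$, the Clayton generator, exponential baselines with $F_j \leq_{hr} G_j$, and the first-stage survival function $\bar{F}_{X^{\star}_{1:n}}(t)=\phi\big(n\psi\big(\bar{F}_1(t)\big)\big)$; the generator, the rates, and this first-stage form are illustrative assumptions consistent with the expressions used in this section.

\begin{verbatim}
import numpy as np

theta = 2.0
phi = lambda u: (1.0 + u) ** (-1.0 / theta)    # Clayton generator (completely monotone)
psi = lambda s: s ** (-theta) - 1.0            # psi = phi^{-1}

n = 3
lam = [1.5, 2.0]    # X-components: F_j ~ Exp(lam[j-1])
mu  = [1.0, 1.2]    # Z-components: G_j ~ Exp(mu[j-1]); lam >= mu componentwise => F_j <=_hr G_j
Fbar = lambda t, r: np.exp(-r * t)

z = np.linspace(1e-4, 12.0, 12001)
dz = z[1] - z[0]
t = np.linspace(0.2, 5.0, 200)

def surv_stage1(x, rate):
    # \bar F_{X*_{1:n}}(x) = phi(n psi(\bar F_1(x)))
    return phi(n * psi(Fbar(x, rate)))

def surv_stage2(t, rates):
    # Remark 2.2: \bar F_{X*_{2:n}}(t) = \int_0^\infty k(t,z) f_{X*_{1:n}}(z) dz, with
    # k(t,z) = phi((n-1) psi(\bar F_2(t)/\bar F_2(z))) for t >= z and k(t,z) = 1 otherwise
    dens1 = -np.gradient(surv_stage1(z, rates[0]), z)
    ratio = np.minimum(Fbar(t[:, None], rates[1]) / Fbar(z[None, :], rates[1]), 1.0)
    k = phi((n - 1) * psi(ratio))              # clipping the ratio at 1 gives k = 1 for t < z
    return (k * dens1[None, :]).sum(axis=1) * dz

FX, FZ = surv_stage2(t, lam), surv_stage2(t, mu)
# Theorem 3.4(b): X*_{2:n} <=_st Z*_{2:n}, i.e. the survival functions are ordered
print(bool(np.all(FX <= FZ + 1e-3)))           # small tolerance absorbs quadrature error
\end{verbatim}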

P