
On the dynamic residual measure of inaccuracy based on extropy in order statistics

Published online by Cambridge University Press:  11 January 2024

M. Mohammadi
Affiliation:
Department of Statistics, University of Zabol, Zabol, Iran
M. Hashempour*
Affiliation:
Department of Statistics, University of Hormozgan, Bandar Abbas, Iran
O. Kamari
Affiliation:
Department of Business Management, University of Human Development, Sulaymaniyah, Iraq
*
Corresponding author: Majid Hashempour; Email: ma.hashempour@hormozgan.ac.ir

Abstract

In this paper, we introduce a novel way to quantify the remaining inaccuracy of order statistics by utilizing the concept of extropy. We explore various properties and characteristics of this new measure. Additionally, we expand the notion of inaccuracy for ordered random variables to a dynamic version and demonstrate that this dynamic information measure provides a unique determination of the distribution function. Moreover, we investigate specific lifetime distributions by analyzing the residual inaccuracy of the first-order statistics. Nonparametric kernel estimation of the proposed measure is suggested. Simulation results show that the kernel estimator with bandwidth selection using the cross-validation method has the best performance. Finally, an application of the proposed measure on the model selection is provided.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

1. Introduction

Consider a set of nonnegative continuous random variables represented as $X_1, X_2, \ldots ,X_n$. These random variables are independent and identically distributed (i.i.d.) and follow a distribution characterized by the cumulative distribution function (CDF) $F_X(x)$, the probability density function (PDF) $f_X(x)$, and the survival function (sf) $\bar{F}_X(x)=1-F_X(x)$. The support interval, denoted by $S_X$, represents the range of values on which this distribution is defined. The order statistics (OS) are obtained by arranging the random variables $X_i$ in nondecreasing order; we denote this arrangement by $X_{1:n} \leq X_{2:n} \leq \cdots \leq X_{n:n}$. For additional information and more detailed explanations, we refer to [Reference Arnold, Balakrishnan and Nagaraja2, Reference David and Nagaraja5]. OS have found applications in various fields, such as strength of materials, statistical inference, reliability theory, goodness-of-fit tests, quality control, outlier detection, and the characterization of probability distributions. For example, in reliability theory, OS are used for statistical modeling: the ith OS in a sample of size n represents the lifetime of an $(n- i + 1)$-out-of-n system.

Suppose that X and Y are two nonnegative random variables representing time to failure of two systems with PDFs f(x) and g(x), respectively. Let F(x) and G(x) be failure distributions, and let $\bar{F}(x) $ and $\bar{G}(x) $ be sfs of X and Y, respectively. Shannon’s [Reference Shannon31] measure of uncertainty associated with the random variable X and Kerridge’s [Reference Kerridge14] measure of inaccuracy are given by:

(1)\begin{equation} {H}(X)=-E[\log f(X)] =- \int ^{+\infty}_0 f(x) \log f(x) dx, \end{equation}

and

(2)\begin{equation} H(X, Y) = -E_f[\log g(X)] = -\int ^{+\infty}_0 f(x) \log g(x) dx, \end{equation}

respectively, where “log” means the natural logarithm. In the case where $g(x) = f(x)$, Equation (2) reduces to Equation (1).

Recently, many researchers have considered the importance of the inaccuracy measure in information theory, and several generalizations of this measure have been introduced. According to Kerridge [Reference Kerridge14], it is important to consider the measure of inaccuracy for several reasons. When an experimenter provides probabilities for different outcomes, their statement can lack precision in two ways: it may be vague due to insufficient information, or some of the provided probabilities may be incorrect. Statistical estimation and inference problems involve making statements that can be inaccurate in either or both of these ways. The communication theory of Shannon and Weaver [Reference Shannon and Weaver30] offers a framework for dealing with the vagueness aspect of inaccuracy, as demonstrated by authors such as Kullback and Leibler [Reference Kullback and Leibler16] and Lindley [Reference Lindley19]. However, this theory has been limited in its ability to address inaccuracy in a broader sense. Kerridge [Reference Kerridge14] argues that the introduction of an inaccuracy measure removes this limitation. He also highlights the duality between information and entropy in communication theory, where uncertainty can be measured by the amount of knowledge needed to achieve certainty. Inaccuracy, therefore, can be seen as a measure of missing information. For more details, refer to [Reference Kayal11–Reference Kayal, Sunoj and Rajesh13].

The measure of information and inaccuracy is associated as $H(X, Y) =H(X) + H(X | Y)$, where $H(X| Y)$ represents the Kullback [Reference Kullback15] relative information measure of X about Y, defined as:

(3)\begin{equation} H(X| Y) =\int ^{\infty}_0 f(x) \log \frac{f(x)}{g(x)}dx. \end{equation}

In the fields of reliability, life testing, and survival analysis, it is important to consider the current age of the system being studied. Therefore, when determining the remaining uncertainty of a system that has survived up to a specific time point, the measures described in Equations (1) and (2) are not appropriate. Ebrahimi [Reference Ebrahimi6] considered a random variable $ X_t = (X -t|X \gt t),~ t\geq 0$, and defined uncertainty and discrimination of such a system, given by:

(4)\begin{equation} H(X; t) = -\int^\infty_t \frac{f(x)}{\bar{F}(t)} \log \frac{f(x)}{\bar{F}(t)} dx, \end{equation}

and

(5)\begin{equation} H(X| Y; t) = \int^\infty_t \frac{f(x)}{\bar{F}(t)} \log \frac{f(x) \bar{G}(t)}{g(x)\bar{F}(t)} dx, \end{equation}

respectively. Clearly when t = 0, Equations (4) and (5), respectively, reduce to Equations (1) and (3). Taneja et al. [Reference Taneja, Kumar and Srivastava33] defined a dynamic measure of inaccuracy associated with two residual lifetime distributions F and G corresponding to the Kerridge measure of inaccuracy given by:

\begin{equation*} H(X,Y ; t) = -\int^\infty_t \frac{f(x)}{\bar{F}(t)} \log \frac{g(x)}{\bar{G}(t)} dx. \end{equation*}

Clearly for t = 0, it reduces to Equation (2). Shannon’s measure of uncertainty associated with ith OS $X_{i:n}$ is given by:

(6)\begin{equation} H(X_{i:n})= - \int^\infty_0 f_{i:n}(x) \log f_{i:n}(x) dx, \end{equation}

where

(7)\begin{equation} f_{i:n}(x)=[ B(i, n-i+1) ]^{-1} F^{i-1}(x)\bar{F}^{n-i}(x) f(x), \end{equation}

is the PDF of ith OS, for $i = 1,2, \ldots ,n$. Here

\begin{equation*} B(i, n-i+1) = \frac{\Gamma(i)\Gamma(n-i+1)}{\Gamma(n+1)}, \end{equation*}

is the beta function with parameters i and $n-i+1$; we refer the interested reader to [Reference Arnold, Balakrishnan and Nagaraja2]. Note that for n = 1, Equation (6) reduces to Equation (1).

Recently, Lad et al. [Reference Lad, Sanfilippo and Agró17] proposed an alternative measure of uncertainty of a random variable called extropy. The extropy of the random variable X is defined by Lad et al. [Reference Lad, Sanfilippo and Agró17] to be:

\begin{equation*} J(X) =-\frac{1}{2}\int ^{\infty}_{0} f^2(x) dx = -\frac{1}{2}\int^1_0 {f \big(F ^{-1}( u)\big)}du. \end{equation*}
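As a quick illustration of this definition, the following minimal Python sketch evaluates $J(X)$ by numerical integration for an exponential density with rate $\lambda$ (an illustrative choice, not one of the paper's examples); the closed form in this case is $J(X)=-\lambda/4$.

```python
# Numerical check of the extropy definition for an exponential PDF (illustrative).
import numpy as np
from scipy import integrate

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)                       # exponential PDF
J_num = -0.5 * integrate.quad(lambda x: f(x) ** 2, 0, np.inf)[0]
print(J_num, -lam / 4)                                     # both approximately -0.5
```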

Extropy is a term coined to represent the opposite of entropy. It refers to the extent of order, organization, and complexity in a system. Extropy is associated with the tendency of systems to increase in complexity, organization, and information. While entropy represents the natural tendency toward disorder and randomness, extropy represents the drive towards order, complexity, and organization. These concepts are often used in different fields, such as physics, information theory, and philosophy, to describe and understand the behavior of systems. The relationship between entropy and extropy can be compared to positive and negative images on a photographic film, as they are related but opposite. Similar to entropy, the maximum extropy occurs when the distribution is uniform. However, they evaluate the refinement of a distribution differently.

Extropy is utilized in scoring forecasting distributions and in speech recognition. One major advantage of extropy is its ease of computation, making it highly valuable for exploring potential applications, such as developing goodness-of-fit tests and inferential methods. Extropy can also be employed to compare the uncertainties associated with two random variables. If we have two random variables X and Y with $J(X) \leq J(Y)$, then X has a greater degree of uncertainty than Y; in other words, if the extropy of the random variable X is lower than the extropy of the random variable Y, then X contains more information than Y.

Qiu [Reference Qiu24] derived the characterization results and symmetric properties of the extropy of OS and record values. Kullback [Reference Kullback15] presented the properties of this measure, including the maximum extropy distribution and its statistical applications. Two estimators for the extropy of a continuous random variable were introduced by Qiu and Jia [Reference Qiu and Jia26]. Qiu, Wang, and Wang [Reference Qiu, Wang and Wang27] explored an expression for the extropy of a mixed system’s lifetime. Also, for more details, see [Reference Hashempour and Mohammadi8–Reference Hashempour and Mohammadi10, Reference Mohammadi and Hashempour20, Reference Pakdaman and Hashempour22].

The organization of the paper is as follows: In Section 2, we introduce a new method to quantify the discrepancy between the distribution of the ith OS and the parent random variable X. This method is based on a residual inaccuracy measure built on extropy. Additionally, we study a dynamic residual measure that captures the discrepancy between the distributions of the ith OS and the parent random variable X, and we establish bounds for these measures of inaccuracy. In Section 3, we focus on the analysis of the residual inaccuracy of the OS and its implications in terms of characterization results. In Section 4, a nonparametric estimator for the proposed measure is obtained. We evaluate the proposed estimator using a simulation study in Section 5. In Section 6, we consider a real data set to show the behavior of the estimators in real cases.

2. Dynamic residual measure of inaccuracy

In this section, we introduce some new measures of uncertainty based on extropy. Suppose that X and Y are two nonnegative continuous random variables with PDFs f and g, respectively. The measure of uncertainty associated with X and the measure of discrimination of X about Y are, respectively, given by:

(8)\begin{equation} J(X)=-\frac{1}{2} \int^\infty_0 f^2(x) dx, \end{equation}

and according to Equation (3),

(9)\begin{equation} J(X | Y)=\frac{1}{2} \int^\infty_0 f(x) \left[\, f(x)-g(x) \right]dx. \end{equation}

Adding Equations (8) and (9), we obtain:

(10)\begin{equation} J(X, Y)= - \frac{1}{2} \int^\infty_0 f(x) g(x) dx. \end{equation}
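The identity behind Equation (10), namely $J(X,Y)=J(X)+J(X\,|\,Y)$, can be verified numerically. The short Python sketch below does so for two exponential densities with illustrative rates (these rates are not taken from the paper).

```python
# Numerical check of J(X, Y) = J(X) + J(X|Y), Equations (8)-(10), for two
# exponential densities with illustrative rates.
import numpy as np
from scipy import integrate

lf, lg = 1.0, 2.5
f = lambda x: lf * np.exp(-lf * x)
g = lambda x: lg * np.exp(-lg * x)

J_X  = -0.5 * integrate.quad(lambda x: f(x) ** 2, 0, np.inf)[0]
J_Xg =  0.5 * integrate.quad(lambda x: f(x) * (f(x) - g(x)), 0, np.inf)[0]
J_XY = -0.5 * integrate.quad(lambda x: f(x) * g(x), 0, np.inf)[0]
print(J_X + J_Xg, J_XY)       # both approximately -lf*lg / (2*(lf + lg))
```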

If we consider F as the actual CDF, then G can be interpreted as a reference CDF. For calculating the remaining uncertainty of a system, which has survived up to time t, the measures defined in Equations (8)–(10) are not suitable. Qiu and Jia [Reference Qiu and Jia25] considered a random variable $ X_t = [X -t|X \gt t],~ t\geq 0$, and defined the uncertainty of such system based on extropy as:

(11)\begin{equation} J(X; t)=- \frac{1}{2} \int^\infty_t \left[ \frac{f(x)}{\bar{F}(t)} \right]^2 dx. \end{equation}

We define the dynamic measure of inaccuracy associated with two residual lifetime distributions F and G corresponding to the measure of inaccuracy given by:

(12)\begin{equation} J(X,Y ; t) = - \frac{1}{2} \int^\infty_t \frac{f(x)}{\bar{F}(t)} \frac{g(x)}{\bar{G}(t)} dx. \end{equation}

Also, the corresponding dynamic measure of uncertainty discrimination of X about Y is given by:

(13)\begin{equation} J(X| Y; t) = \frac{1}{2} \int^\infty_t \frac{f(x)}{\bar{F}(t)}\left[ \frac{f(x)}{\bar{F}(t)}-\frac{g(x)}{\bar{G}(t)}\right] dx. \end{equation}

Clearly, when t = 0, Equations (11), (12), and (13) reduce to Equations (8), (10), and (9), respectively. In the following, we study some information-theoretic measures based on OS using the probability integral transformation and define extropy and relative information measures.

Theorem 2.1. Suppose that X is a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then, the extropy of the ith OS $X_{i:n}$ is given by:

\begin{eqnarray*} J(X_{i:n})&=& - \frac{1}{2} \int^\infty_0 f^2_{i:n}(x) dx \\ &=&- \frac{B(2i-1, 2(n-i)+1)}{2B(i, n-i+1)^2} E_{g_{i,2}} \left[ f(F^{-1}( W_{i:n})) \right], \end{eqnarray*}

where $W_{i:n}$ is the ith OS of uniformly distributed random variables $U_1, \ldots, U_n$ and

\begin{equation*} g_{i,2}(u)= \frac{u^{2(i-1)}(1-u)^{2(n-i)}}{B(2i-1, 2(n-i)+1)}. \end{equation*}
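The two expressions in Theorem 2.1 can be cross-checked numerically. The sketch below does this for an exponential parent distribution (an illustrative assumption, not a case treated in the paper): the direct integral of $f_{i:n}^2$ and the beta-expectation form agree.

```python
# Numerical cross-check of Theorem 2.1 for an exponential parent distribution
# (illustrative assumption): both expressions of J(X_{i:n}) should coincide.
import numpy as np
from scipy import integrate
from scipy.special import beta

lam, n, i = 1.5, 5, 2
F = lambda x: 1 - np.exp(-lam * x)
f = lambda x: lam * np.exp(-lam * x)
Finv = lambda u: -np.log(1 - u) / lam

# direct form: J(X_{i:n}) = -(1/2) * integral of f_{i:n}(x)^2
f_in = lambda x: F(x)**(i - 1) * (1 - F(x))**(n - i) * f(x) / beta(i, n - i + 1)
J_direct = -0.5 * integrate.quad(lambda x: f_in(x)**2, 0, np.inf)[0]

# beta-expectation form of Theorem 2.1
g2 = lambda u: u**(2 * i - 2) * (1 - u)**(2 * (n - i)) / beta(2 * i - 1, 2 * (n - i) + 1)
E = integrate.quad(lambda u: g2(u) * f(Finv(u)), 0, 1)[0]
J_thm = -beta(2 * i - 1, 2 * (n - i) + 1) / (2 * beta(i, n - i + 1)**2) * E

print(J_direct, J_thm)   # the two values agree numerically
```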

In the following, we define the measure of inaccuracy between the ith OS and the parent random variable.

Definition 2.2. Let X be a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then, we define the measure of inaccuracy between the ith OS and the parent random variable as:

(14)\begin{eqnarray} J(X_{i:n}, X)&=& -\frac{1}{2} \int^\infty_0 f_{i:n}(x) f(x) dx. \end{eqnarray}

Using Equation (7), we have

\begin{eqnarray*} J(X_{i:n}, X) &=& -\frac{1}{2} \int^\infty_0 \frac{F(x)^{i-1}\bar{F}^{n-i}(x)f^2(x)}{B(i,n-i+1)}dx\nonumber \\ &=& -\frac{1}{2} \int^1_0 \frac{u^{i-1}(1-u)^{n-i}f(F^{-1}(u))}{B(i,n-i+1)}du\nonumber \\ &=& - \frac{1}{2} E_{g_{i,1}} \left[ f(F^{-1}(W_i)) \right], \end{eqnarray*}

where

\begin{equation*} g_{i,1}(w)= \frac{w^{i-1}(1-w)^{n-i}}{B(i, n-i+1)}, \quad 0 \leq w \leq 1, \end{equation*}

is the density function of $W_i$.

Also, we obtain the measure of uncertainty discrimination for the distributions $X_{i:n}$ and X based on extropy as:

\begin{eqnarray*} J(X_{i:n} | X) &=& \frac{1}{2} \int^\infty_0 f_{i:n}(x) \left[\, f_{i:n}(x)-f(x) \right]dx \\ &=& \frac{B(2i-1, 2(n-i)+1)}{2B(i, n-i+1)^2} E_{g_{i,2}} \left[ f(F^{-1}(W_i)) \right] -\frac{1}{2} E_{g_{i,1}} \left[ f(F^{-1}(W_i)) \right]. \end{eqnarray*}

2.1. Dynamic residual measure of inaccuracy for OS

In this subsection, we propose the dynamic version of the inaccuracy measure in Equation (14).

Definition 2.3. The dynamic residual measure of inaccuracy associated with two residual lifetime distributions $F_{i:n}$ and F based on extropy is defined as (DRJOS-inaccuracy measure):

(15)\begin{equation} J(X_{i:n},X ; t) = - \frac{1}{2} \int^\infty_t \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx, \end{equation}

where $\bar{F}_{i:n}(t)=1-F_{i:n}(t)$ is the sf corresponding to $X_{i:n}$ given by:

(16)\begin{equation} \bar{F}_{i:n}(t)=\frac{\bar{B}_{F(t)}(i,n-i+1)}{B(i, n-i+1)}, \end{equation}

here

\begin{equation*} \bar{B}_{F(t)}(i,n-i+1)=\int^1_{F(t)} u^{i-1}(1-u)^{n-i}du, \qquad 0 \lt F(t) \lt 1, \end{equation*}

is the incomplete beta function; for more details, see [Reference David and Nagaraja5].

Note that, when t = 0, Equation (15) reduces to the measure of inaccuracy defined in Equation (14).
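Since $\bar{F}_{i:n}(t)$ in Equation (16) is a regularized incomplete beta function evaluated at $F(t)$, it can be computed directly with standard software. The following sketch, assuming an exponential parent distribution with illustrative parameter values, compares Equation (16) with a Monte Carlo estimate of $P(X_{i:n} \gt t)$.

```python
# Check of Equation (16): survival function of the ith order statistic via the
# regularized incomplete beta function versus Monte Carlo (exponential parent,
# illustrative parameter values).
import numpy as np
from scipy.special import betainc     # regularized incomplete beta I_x(a, b)

lam, n, i, t = 1.0, 6, 2, 0.4
Ft = 1 - np.exp(-lam * t)
sf_os = 1 - betainc(i, n - i + 1, Ft)  # Equation (16)

rng = np.random.default_rng(0)
samples = np.sort(rng.exponential(scale=1 / lam, size=(100_000, n)), axis=1)
print(sf_os, (samples[:, i - 1] > t).mean())   # the two values should be close
```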

The “DRJOS-inaccuracy” measure can be viewed as a generalization of the idea of extropy. This measure is a useful tool for the measurement of error in experimental results. In fact, the extropy inaccuracy measure can be expressed as the sum of an uncertainty measure and a discrimination measure between two distributions. When an experimenter states the probability of various events in an experiment, the statement can lack precision in two ways: one results from incorrect information (e.g., mis-specifying the model) and the other from vagueness in the statement (e.g., missing observations or insufficient data). All estimation and inference problems are concerned with making statements, which may be inaccurate in either or both of these ways. The DRJOS-inaccuracy measure can account for these two types of errors.

This measure has applications in statistical inference and estimation. Also, some concepts in reliability studies for modeling lifetime data, such as the failure rate and the weighted mean past life function, can be described using the DRJOS-inaccuracy measure. In lifetime studies, the data are generally truncated; hence there is scope for extending information-theoretic concepts to ordered situations and record values. Motivated by this, we extend the definition of inaccuracy to the DRJOS-inaccuracy measure. Further, we also look into the problem of characterization of probability distributions using the functional form of these measures. Also, the identification of an appropriate probability distribution for lifetimes is one of the basic problems encountered in reliability theory. Although several methods, such as goodness-of-fit procedures and probability plots, are available in the literature to find an appropriate model for the observations, they fail to provide an exact model. A method to attain this goal can be to utilize a DRJOS-inaccuracy measure.

The DROS-inaccuracy and DRJOS-inaccuracy measures are not competing but are rather complementary. However, the properties of symmetry, finiteness, and simplicity in calculations can be considered as the advantages of DRJOS-inaccuracy measure over DROS-inaccuracy measure. The most important advantage of extropy is that it is easy to compute, and it will therefore be of great interest to explore its important potential applications in developing goodness-of-fit tests and inferential methods.

The inaccuracy and extropy-inaccuracy measures are complementary. In addition, the proposed measure is symmetric and non-positive, so it is bounded above by zero, and it is easy to calculate. Reliability concepts such as the failure rate and the mean residual life function can also be described in terms of the extropy-inaccuracy measure.

Figure 1. Graphs of $ J(X_{1:n},X ; t) $ for different values of times (left panel) and sample sizes (right panel) on several values of parameter λ in Example 2.5.

Figure 2. Graphs of $ J(X_{1:n},X ; t) $ for different values of times (left panel) and sample sizes (right panel) on several values of parameter a in Example 2.6.

In the following, we evaluate the residual inaccuracy measure of $ X_{1:n}$ for some specific lifetime distributions that are applied widely in survival analysis, life testing, and system reliability.

Corollary 2.4. In particular, for i = 1, we get:

\begin{equation*} J(X_{1:n},X ; t) =- \frac{n}{2\bar{F}^{n+1}(t)}\int^\infty_t \bar{F}^{n-1}(x)f^2 (x)dx. \end{equation*}

Example 2.5. Let the random variable X be exponentially distributed with PDF $f (x) =\lambda \exp\{-\lambda x\}$ and CDF $F(x) =1- \exp\{-\lambda x\},~ \lambda \gt 0$. For i = 1, that is, the case of the sample minimum, after some algebraic manipulations we have:

\begin{eqnarray*} J(X_{1:n},X ; t) = - \frac{n \lambda}{2(n+1)}. \end{eqnarray*}

The left panel of Figure 1 shows that $ J(X_{1:n},X ; t) $ is constant over time (t) for a fixed value of n. The right panel of Figure 1 shows that $ J(X_{1:n},X ; t) $ tends to $ - \frac{\lambda}{2} $ as the sample size n increases. Also, we can observe that the inaccuracy of the sample minimum is decreasing with respect to the parameter λ.
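This closed form is easy to confirm numerically. The sketch below evaluates the integral in Corollary 2.4 for the exponential distribution at a few time points (the parameter values are illustrative) and compares it with $-n\lambda/(2(n+1))$.

```python
# Numerical check of Example 2.5: the residual inaccuracy of the sample minimum
# for exponential data is constant in t and equals -n*lam/(2*(n+1)).
import numpy as np
from scipy import integrate

lam, n = 0.5, 4
Fbar = lambda x: np.exp(-lam * x)
f = lambda x: lam * np.exp(-lam * x)

def J_min(t):
    # Corollary 2.4 with i = 1
    val = integrate.quad(lambda x: Fbar(x) ** (n - 1) * f(x) ** 2, t, np.inf)[0]
    return -n * val / (2 * Fbar(t) ** (n + 1))

for t in (0.0, 1.0, 3.0):
    print(t, J_min(t), -n * lam / (2 * (n + 1)))   # all values coincide
```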

Example 2.6. Assume that the random variable X follows a beta distribution with PDF $f (x) =a(1-x)^{a-1},~ a \gt 1$, and sf $ \bar{F}(x) =(1-x)^a, ~ 0 \lt x \lt 1$. We obtain

\begin{eqnarray*} J(X_{1:n},X ; t) = -\frac{na^2}{2a(n+1)-2}(1-t)^{-1}. \end{eqnarray*}

Figure 2 shows a decrease in inaccuracy for different values of a. The left panel of Figure 2 shows that $ J(X_{1:n}, X; t) $ decreases with increasing time (t) for a fixed value of n. The right panel of Figure 2 shows that $ J(X_{1:n},X ; t) $ tends to $ \frac{a}{2(t-1)} $ as the sample size n increases.

Example 2.7. Assume that X follows a uniform distribution over $[0, b]$. Then we can verify that:

\begin{eqnarray*} J(X_{1:n},X ; t) = -\frac{1}{2(b-t)}, \qquad t \lt b. \end{eqnarray*}

The left panel of Figure 3 shows that $ J(X_{1:n},X ; t) $ is nonincreasing with respect to time (t). The right panel of Figure 3 shows that $ J(X_{1:n},X ; t) $ is constant for different values of sample size n. Also, we can observe that the inaccuracy of sample minimum is increasing with respect to parameter b.

Theorem 2.8. Let $M =f (m) \lt \infty $, where m is the mode of the distribution, that is, $f(m)=\sup_x f(x)$. Then

\begin{eqnarray*} - \frac{M}{2}\, Q(t) \leq J(X_{i:n}, X; t) \leq 0, \end{eqnarray*}

Figure 3. Graphs of $ J(X_{1:n},X ; t) $ for different values of times (left panel) and sample sizes (right panel) on several values of parameter b in Example 2.7.

where $Q(t)= \frac{\int_t^\infty f_{i:n}(x)dx}{\bar{F}_{i:n}(t)\bar{F}(t)}.$

Proof. From Equation (15) and the fact that $f(x) \leq M$ for all x, we have

\begin{eqnarray*} J(X_{i:n},X ; t) &=& - \frac{1}{2} \int^\infty_t \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx \\ &\geq& - \frac{M}{2\bar{F}_{i:n}(t)\bar{F}(t)}\int_t^\infty f_{i:n}(x)dx \\ &=& - \frac{M}{2}\, Q(t). \end{eqnarray*}

The upper bound is immediate since the integrand in Equation (15) is nonnegative.

The proof is completed.

In the following, we express a lower bound for $J(X_{i:n},X ; t)$ in terms of the inaccuracy measure $J(X_{i:n},X)$.

Proposition 2.9. A lower bound for the dynamic measure of inaccuracy between the distributions $X_{i:n}$ and X based on extropy is obtained by:

\begin{eqnarray*} D(t) J(X_{i:n},X) \leq J(X_{i:n},X ; t), \end{eqnarray*}

where $D(t)=[ \bar{F}_{i:n}(t) \bar{F}(t)]^{-1}.$

Proof. We have

\begin{eqnarray*} J(X_{i:n},X ; t) &=& - \frac{1}{2} \int^\infty_t \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx \\ &\geq & - \frac{1}{2} \int^\infty_0 \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx \\ &=&\frac{1}{\bar{F}_{i:n}(t) \bar{F}(t)} J(X_{i:n},X)= D(t)\, J(X_{i:n},X). \end{eqnarray*}

In what follows, we will investigate the relationship between $J(X_{i:n},X ; t) $ and $J(X_{i:n},X)$.

Corollary 2.10. Suppose that X is a nonnegative continuous random variable with PDF f(x) and CDF F(x). Then,

\begin{eqnarray*} J(X_{i:n},X ; t) = A(t) J(X_{i:n},X) -C(t), \end{eqnarray*}

where $A(t)=[ \bar{F}_{i:n}(t) \bar{F}(t)]^{-1}$ and $C(t)=- \frac{1}{2} \int^t_0 \frac{f_{i:n}(x)f(x)}{\bar{F}_{i:n}(t)\bar{F}(t)} dx$.

Proof. The proof is obtained from the following equation:

\begin{eqnarray*} J(X_{i:n},X ; t) &=& - \frac{1}{2} \int^\infty_t \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx \\ &=& - \frac{1}{2} \left( \int^\infty_0 \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx - \int^t_0 \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx \right)\\ &=&\frac{1}{\bar{F}_{i:n}(t) \bar{F}(t)} J(X_{i:n},X)+ \frac{1}{2} \int^t_0 \frac{f_{i:n}(x)}{\bar{F}_{i:n}(t)} \frac{f(x)}{\bar{F}(t)} dx. \end{eqnarray*}

2.2. Stochastic order

We want to prove a property of dynamic inaccuracy measure using some properties of stochastic ordering. We present the following definitions:

(I) A random variable X is said to be less than Y in the stochastic ordering (denoted by $X \leq_{st} Y)$ if $\bar{F} (x) \leq \bar{G}(x)$ for all x, where $\bar{F} (x)$ and $\bar{G}(x)$ are the reliability functions of X and Y, respectively.

(II) A random variable X is said to be less than Y in likelihood ratio ordering (denoted by $X \leq_{lr} Y)$ if $f_X (x) /g_Y (x)$ is nonincreasing in x.

Theorem 2.11. Suppose that $X_1, \ldots, X_n$ are i.i.d. nonnegative random variables representing the component lifetimes of a series system. If $f(\cdot)$ is decreasing on its support, then the corresponding inaccuracy increases in magnitude with n; that is, $J(X_{1:n},X ; t)$ is decreasing in n.

Proof. Let the random variable $Y=\{F(X_{1:n})\mid X_{1:n} \gt t\}$ have PDF $ g_n(y)=\frac{(1-y)^{n-1}}{\bar{B}_{F(t)}(1, n)} $, where $F(t) \leq y \leq 1$ and $\bar{B}_{F(t)}(c, d)=\int^1_{F(t)} x^{c-1}(1-x)^{d-1}dx$ is the incomplete beta function. For i = 1 (that is, for a series system), the ratio

\begin{equation*} \frac{g_{n+1}(y)}{g_n(y)}=\frac{\bar{B}_{F(t)}(1, n)}{\bar{B}_{F(t)}(1, n+1)} (1-y), \quad F(t)\leq y \leq 1, \end{equation*}

is a decreasing function of y. Denoting by $Y_n$ the variable Y based on a sample of size n, this implies that $Y_{n+1} \leq_{lr} Y_n$ and hence $Y_{n+1}\leq_{st} Y_n$; for more details, see [Reference Shaked and Shanthikumar29]. Since f is decreasing in its support, $f (F^{-1}(y))$ is a decreasing function of y, and therefore $E_{g_{n+1}}\left( f(F^{-1}(Y)) \right) \geq E_{g_{n}}\left( f(F^{-1}(Y)) \right)$. Also, using the probability integral transformation $F(X)=U$, it follows from Equation (15) that, for i = 1, the dynamic residual inaccuracy is:

\begin{eqnarray*} J(X_{1:n},X ; t) &=& -\frac{1}{2\bar{F}_{1:n} (t) \bar{F} (t)} \int^\infty_t f_{1:n}(x) f(x)dx \nonumber \\ &=& -\frac{n}{2\bar{F}_{1:n} (t) \bar{F} (t)} \int^\infty_t \bar{F}^{n-1}(x) f^2(x)dx \nonumber \\ &=& -\frac{n}{2\bar{F}_{1:n} (t) \bar{F} (t)} \int^1_{F(t)} (1-u)^{n-1} f(F^{-1}(u))du \nonumber \\ &=& -\frac{n\, \bar{B}_{F(t)}(1, n)}{2\bar{F}_{1:n} (t) \bar{F} (t)} E_{g_{n}} \left( f(F^{-1}(Y)) \right)\nonumber \\ &=& -\frac{1}{2\bar{F}(t)} E_{g_{n}} \left( f(F^{-1}(Y)) \right), \end{eqnarray*}

since $\bar{F}_{1:n}(t)=\bar{F}^{\,n}(t)= n\, \bar{B}_{F(t)}(1, n)$. Similarly,

\begin{eqnarray*} J(X_{1:n+1},X ; t) &=& -\frac{1}{2\bar{F}(t)} E_{g_{n+1}} \left( f(F^{-1}(Y)) \right). \end{eqnarray*}

Hence, for i = 1 and $n \geq 1$, we have:

\begin{eqnarray*} J(X_{1:n},X ; t)- J(X_{1:n+1},X ; t) &=& \frac{1}{2\bar{F}(t)} \left[ E_{g_{n+1}} \left( f(F^{-1}(Y)) \right) - E_{g_{n}} \left( f(F^{-1}(Y)) \right) \right] \geq 0. \end{eqnarray*}

3. Some results on characterization

In this section, we demonstrate that the measure of dynamic residual inaccuracy of OS can also determine the underlying distribution uniquely. The characterization of the underlying distribution of a sample based on measures like extropy, or its generalized versions for OS, has been explored by a number of authors in recent studies. The characterization property of the measure of dynamic residual inaccuracy between the ith OS and the parent random variable is studied by using a sufficient condition for the uniqueness of the solution of the initial value problem $dy/dx=f(x,y), ~ y(x_0)= y_0$, where f is a function of two variables whose domain is a region $S \subset R^2$, $(x_0, y_0)$ is a point in S, and y is an unknown function. By a solution of the initial value problem on an interval $L \subset R$, we mean a function $\eta(x)$ such that:

(i) η is differentiable on L,

(ii) the graph of η lies in S,

(iii) $\eta(x_0) = y_0$, and

(iv) $\eta'(x) = f (x, \eta(x))$, for all $x \in L$.

The following proposition together with other results will help in proving our characterization result.

Proposition 3.1. Let f be a continuous function defined in a domain $S \subset R^2$ and let $| f (x, y_1) - f (x, y_2)| \leq k|y_1- y_2|, k \gt 0,$ for every point $(x, y_1)$ and $(x, y_2)$ in S; that is, f satisfies the Lipschitz condition in S. Then, the function $y = \eta(x)$ satisfying the initial value problem $y' = f (x, y)$ and $\eta(x_0) = y_0,~ x \in L$, is unique.

We will utilize the lemma provided by Gupta and Kirmani [Reference Gupta and Kirmani7] to present a condition that is sufficient to guarantee the fulfillment of the Lipschitz condition within the set S.

Lemma 3.2. Suppose that f is a continuous function in a convex region $S \subset R^2$. Assume that $\partial f/ \partial y$ exists and it is continuous in S. Then, f satisfies the Lipschitz condition in S.

Theorem 3.3. Assume that X is a nonnegative continuous random variable with CDF F. Suppose that $J(X_{i:n}, X ; t)$ is the dynamic residual inaccuracy of the ith OS based on a random sample of size n. Then $J(X_{i:n},X ; t)$ characterizes the distribution.

Proof. We have $J(X_{i:n},X ; t) = -\frac{1}{2\bar{F}_{i;n} (t) \bar{F} (t)} \int^\infty_t f_{i:n}(x) f(x)dx$. Taking derivative of both sides with respect to t, we have:

\begin{eqnarray*} \frac{d}{dt}J(X_{i:n},X ; t)= \frac{1}{2}r_{F_{i:n}}(t)r_{F}(t)+ J(X_{i:n},X ; t) \left[ r_{F}(t)+ r_{F_{i:n}}(t) \right], \end{eqnarray*}

where $r_F(t)$ and $r_{F_{i:n}}(t)$ are the hazard rates (HRs) of X and $X_{i:n}$, respectively. Note that $r_{F_{i:n}}(t)= k(t)\, r_F(t)$, where

\begin{equation*} k(t)=\frac{F^{i-1}(t) \bar{F}^{n-i+1}(t)}{\bar{B}_{F(t)}(i, n-i+1)}. \end{equation*}

Differentiating again with respect to t and after some algebraic manipulations, we have:

(17)\begin{align} r^{'}_F(t)=- \frac{\frac{1}{2} r^2_F(t) k^{'}(t)+r_{F}(t)J^{'}(X_{i:n},X ; t)+ J(X_{i:n},X ; t) r_{F}(t)k^{'}(t)+k(t) r_{F}(t) J^{'}(X_{i:n},X ; t) } { r_{F}(t) k(t)+ J(X_{i:n},X ; t)+ J(X_{i:n},X ; t) k(t)}. \end{align}

Suppose that there are two distribution functions F and $F^*$ such that $J(X_{i:n},X ; t)=J(X^*_{i:n},X^* ; t)$ for all t. Then we get from Equation (17) that $ r^{'}_F(t)= \xi (t, r_{F}(t) )$ and $ r^{'}_{F^*}(t)= \xi (t, r_{F^*}(t) )$, where

\begin{eqnarray*} \xi (t, y)=- \frac{\frac{1}{2}y^2 k^{'}(t)+y\, J^{'}(X_{i:n},X ; t)+ J(X_{i:n},X ; t)\, y\, k^{'}(t)+ k(t)\, y\, J^{'}(X_{i:n},X ; t)} {k(t)\, y+ J(X_{i:n},X ; t)+ k(t)\, J(X_{i:n},X ; t)}. \end{eqnarray*}

By using Lemma 3.2 and Proposition 3.1, we have $r_{F^*}(t)=r_F(t) $, for all t. Using the fact that the HRF characterizes the distribution function uniquely, we get the desired result.

In the following, we characterize some specific life length distributions.

Theorem 3.4. Suppose that X is a nonnegative continuous random variable with CDF F, and that the relation between the dynamic residual inaccuracy of $X_{1:n}$ and the HRF is given by:

\begin{equation*} J(X_{1:n},X ; t)= -k ~ r_F(t), \end{equation*}

where k is a constant. Then X has

I) an exponential distribution if and only if $ k = \frac{n}{2(n+1)} $,

II) a finite range distribution if and only if $ k \gt \frac{n}{2(n+1)} $,

III) a Pareto distribution if and only if $k \lt \frac{n}{2(n+1)} $.

Proof. We consider sufficiency. Let us assume that:

\begin{equation*} J(X_{1:n},X ; t)= -k~ r_F(t). \end{equation*}

Taking the derivative with respect to t on both sides of the above equation, we have:

(18)\begin{eqnarray} \frac{\partial}{\partial t}\left( J(X_{1:n},X ; t) \right)= \frac{1}{2} r_F(t)r_{F_{1:n}}(t)+ r_F(t) J(X_{1:n},X ; t)+ r_{F_{1:n}}(t) J(X_{1:n},X ; t), \end{eqnarray}

where $r_{F_{1:n}}(t)$ and $r_F(t)$ are the HRs of $X_{1:n}$ and X, respectively. It is easy to see that $r_{F_{1:n}}(t)=n r_F(t)$. Putting the value of $r_{F_{1:n}}(t)$, Equation (18) reduces to:

(19)\begin{equation} \frac{\partial}{\partial t}\left( J(X_{1:n},X ; t) \right)= \frac{n}{2}r^2_F(t)+(n+1) r_{F}(t) J(X_{1:n},X ; t). \end{equation}

Using $J(X_{1:n},X ; t)=-k\, r_{F}(t)$ in Equation (19), we get:

\begin{equation*} \frac{r^{'}_F(t)}{r^2_F(t)}= - \frac{n-2k(n+1)}{2k}, \qquad t \geq 0. \end{equation*}

Solving this equation yields

(20)\begin{equation} r_{F}(t)= \frac{1}{q+pt},\qquad t \geq 0, \end{equation}

where $p=\frac{n-2k(n+1)}{2k}$ and $q=r^{-1}_F(0).$

I) If $k = \frac{n}{2(n+1)}$, then p = 0, and $r_{F}(t)$ turns out to be a constant, which is just the condition under which X has an exponential distribution.

II) If $k \gt \frac{n}{2(n+1)}$, then p < 0, and Equation (20) becomes the HRF of the finite range distribution.

III) If $k \lt \frac{n}{2(n+1)}$, then p > 0, which is just the condition under which X has a Pareto distribution.

Conversely, the necessity of Parts (I)–(III) can be verified by using the examples in Section 2. This completes the proof, noting that the CDF is determined uniquely by its failure rate.

Corollary 3.5. It follows from Equation (19) that $J(X_{1:n},X ; t)$ is decreasing (increasing) in t if and only if:

\begin{equation*} J(X_{1:n},X ; t) \leq (\geq)\, -\frac{n }{2(n+1)}\, r_F(t). \end{equation*}

4. Nonparametric estimation

In this section, we propose a nonparametric estimator of $ J(X_{i:n}, X; t) $. Assume that $X_{1},\ldots,X_{n}$ is a random sample obtained from a population with PDF $ f(\cdot)$ and CDF $F(\cdot)$. Then, a nonparametric estimator of the dynamic extropy-based measure of residual inaccuracy between the distributions $X_{i:n}$ and X can be obtained by:

(21)\begin{align} \widehat{J}(X_{i:n},X ; t) &= - \frac{1}{2} \int^\infty_t \frac{\widehat{f}_{i:n}(x)}{\widehat{\bar{F}}_{i:n}(t)} \frac{\widehat{f}(x)}{\widehat{\bar{F}}(t)} dx, \end{align}

where $ \widehat{\bar{F}}(\cdot)=1- \widehat{F}(\cdot) $ and $ \widehat{f}(\cdot) $ are the estimations of $ \bar{F}(\cdot)=1-F(\cdot) $ and $ f(\cdot) $, respectively. Also, $ \widehat{f}_{i:n}(\cdot)$ and $ \widehat{\bar{F}}_{i:n}(\cdot) $ can be obtained by replacing $ \widehat{\bar{F}}(\cdot)$ and $ \widehat{f}(\cdot) $ in Equations (7) and (16).
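A minimal sketch of this plug-in construction is given below for a Gaussian kernel and $i = 1$. The bandwidths $h_f$ and $h_F$ are passed in by hand here (illustrative values), and the exponential test sample is only for demonstration; in practice, the bandwidths would be chosen by the NR/PI/CV rules discussed below.

```python
# Sketch of the plug-in estimator in Equation (21): Gaussian kernel estimates
# of f and F are substituted into Equations (7) and (16). Bandwidths and the
# test sample are illustrative assumptions.
import numpy as np
from scipy import integrate
from scipy.special import beta as beta_fun
from scipy.stats import norm

def kde_pdf(x, data, h):
    # Gaussian kernel density estimate at a scalar point x (Equation (22))
    return norm.pdf((x - data) / h).mean() / h

def kde_cdf(x, data, h):
    # Gaussian kernel CDF estimate at a scalar point x
    return norm.cdf((x - data) / h).mean()

def J_hat(t, data, i, h_f, h_F, upper):
    n = len(data)
    fhat = lambda x: kde_pdf(x, data, h_f)
    Fhat = lambda x: min(max(kde_cdf(x, data, h_F), 1e-12), 1 - 1e-12)
    # plug-in version of Equation (7)
    f_in = lambda x: Fhat(x)**(i - 1) * (1 - Fhat(x))**(n - i) * fhat(x) / beta_fun(i, n - i + 1)
    # plug-in version of Equation (16)
    Fbar_in_t = integrate.quad(lambda u: u**(i - 1) * (1 - u)**(n - i),
                               Fhat(t), 1)[0] / beta_fun(i, n - i + 1)
    num = integrate.quad(lambda x: f_in(x) * fhat(x), t, upper)[0]
    return -0.5 * num / (Fbar_in_t * (1 - Fhat(t)))

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2.0, size=30)
print(J_hat(t=1.0, data=sample, i=1, h_f=0.6, h_F=0.5, upper=sample.max() + 5.0))
```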

Now, we consider kernel methods for the estimation of PDF $ f(\cdot)$ and CDF $F(\cdot)$ to use Equation (21). The kernel method for the estimation of PDF $ f(\cdot)$ was defined by Silverman [Reference Silverman32] as:

(22)\begin{equation} \widehat{f}_{h_f}(x)= \frac{1}{nh_f} \sum_{i=1}^{n} K\Big(\frac{x-X_i}{h_f}\Big), \qquad x\in \mathbb{R}, \end{equation}

where $h_f$ is a bandwidth or smoothing parameter and $K(\cdot)$ is a kernel function.

Some commonly used kernels are the normal (Gaussian), Epanechnikov, and tricube kernels. The asymptotic relative efficiencies reported by Silverman [Reference Silverman32, p. 43] show that there is not much difference among the kernels when the mean integrated squared error criterion is used. Also, estimates obtained by using different kernels are usually numerically very similar. So, the choice of the kernel K is not too important, and the standard normal density function is used as the kernel function $K(\cdot)$ in Equation (22). What matters much more is the choice of the bandwidth, which controls the amount of smoothing. Small bandwidths give very rough estimates, while larger bandwidths give smoother estimates. Therefore, we only focus on the selection of the bandwidth parameter.

Minimizing the mean integrated squared error (MISE), defined as $E\Big( \int \big(\,\widehat{f}_{h_f}(x)-f(x)\big)^2 dx \Big)$, is a common approach to bandwidth selection. Normal reference (NR) and cross-validation (CV) methods are two common bandwidth selection methods for kernel PDF estimation based on minimizing the MISE. Under the assumption that the data density is normal, the bandwidth $h_f$ minimizing the MISE, called the NR or rule-of-thumb bandwidth, is:

\begin{equation*} h_f^{NR}=1.06 \sigma n^{-1/5}, \end{equation*}

where σ is estimated by $\min\{S, Q/1.34\}$ in which S is the sample standard deviation and Q is the interquartile range. The CV is another method for bandwidth selection. The leave-one-out CV method for bandwidth selection can be considered as:

\begin{equation*} h_f^{CV}= \arg \min_{h_f} \Big[ \int \widehat{f}_{h_f}^{\ 2} (x) dx - \frac{2}{n} \sum_{i=1}^{n} \widehat{f}_{h_f,-i}(X_i) \Big], \end{equation*}

where $ \widehat{f}_{h_f,-i}(X_i) $ denotes the kernel estimator obtained by omitting $X_i$.
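The following sketch implements this leave-one-out criterion for a Gaussian kernel and compares the resulting bandwidth with the NR rule; the closed form of $\int \widehat{f}_{h}^{\,2}(x)\,dx$ for the Gaussian kernel is used, and the candidate grid and test sample are illustrative assumptions.

```python
# Sketch of leave-one-out (least-squares) cross-validation for the PDF
# bandwidth h_f with a Gaussian kernel, compared with the NR rule.
import numpy as np
from scipy.stats import norm

def lscv_score(h, data):
    n = len(data)
    diff = data[:, None] - data[None, :]
    # closed form of  int f_hat_h(x)^2 dx  for the Gaussian kernel
    int_f2 = norm.pdf(diff / h, scale=np.sqrt(2)).sum() / (n**2 * h)
    # leave-one-out term (2/n) * sum_i f_hat_{h,-i}(X_i): drop the diagonal
    loo = (norm.pdf(diff / h).sum() - n * norm.pdf(0)) / (n * (n - 1) * h)
    return int_f2 - 2 * loo

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)
grid = np.linspace(0.05, 2.0, 60)
h_cv = grid[np.argmin([lscv_score(h, x) for h in grid])]
h_nr = 1.06 * min(x.std(ddof=1), (np.quantile(x, 0.75) - np.quantile(x, 0.25)) / 1.34) * len(x)**-0.2
print(h_cv, h_nr)
```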

The NR method is derived under the assumption that the underlying density is normal. When the data are not normal, it still provides reasonable bandwidth choices. However, the CV bandwidth selection method is data-driven rather than dependent on the assumption of normality. There is no simple and universal answer to the question of which bandwidth selector is the most adequate for a given dataset. Trying several selectors and inspecting the results may help to determine which one estimates the density better. However, there are a few useful facts and suggestions. The NR method is a quick, simple, and inexpensive bandwidth selector; however, it tends to give bandwidths that are too large for non-normal-like data. The CV method may be better suited for highly non-normal and rough densities, for which the NR method may end up over-smoothing.

For CDF estimation, kernel and empirical methods are the two main approaches. The empirical estimator is a step function even when the CDF is continuous, and so it is less accurate than the kernel method; see [Reference Nadaraya21]. The kernel estimator of the CDF was proposed by Nadaraya [Reference Nadaraya21] as:

\begin{equation*} \widehat{F}_{h_F}(x)= \frac{1}{n} \sum_{i=1}^{n} W(\frac{x-X_i}{h_F}), \end{equation*}

where $h_F$ is a bandwidth or smoothness parameter and $W(x) =\int_{-\infty}^{x} K(t)dt$ is the CDF of a positive kernel function $K(\cdot)$. When applying $ \widehat{F}_{h_F} $, one needs to choose the kernel and the bandwidth. It was shown by Lejeune and Sarda [Reference Lejeune and Sarda18] that the choice of the kernel is less important than the choice of the bandwidth for the performance of the CDF estimator. In general, the idea underlying bandwidth selection is the minimization of the MISE, defined as:

(23)\begin{equation} MISE(h_F)= E\Big[ \int_{-\infty}^{+\infty} \big(\widehat{F}_{h_F} (x)- F(x)\big)^2 dx \Big]. \end{equation}

We focus on bandwidth selection based on the plug-in (PI) and CV approaches. In the PI approach, the bandwidth is selected by minimizing an asymptotic approximation of the MISE. In this paper, we use the PI approach provided by Polansky and Baker [Reference Polansky and Baker23], which develops earlier ideas of Altman and Leger [Reference Altman and Leger1]. They showed that $ h_F^{PI}=\widehat{C} n^{-1/3} $, where $\widehat{C}$ is estimated from the data sample. A well-known bandwidth selection method for CDF estimation is the CV method, initially proposed by Sarda [Reference Sarda28]. Altman and Leger [Reference Altman and Leger1] showed that this method basically requires large sample sizes to ensure good results. Therefore, Bowman et al. [Reference Bowman, Hall and Prvan3] proposed a modified version of the CV method that is asymptotically optimal and works well in simulation studies and real cases. Here, we use the CV approach proposed by Bowman et al. [Reference Bowman, Hall and Prvan3]. They considered the CV bandwidth:

\begin{equation*} h_F^{CV}= \arg \min_{h_F} \frac{1}{n} \sum_{i=1}^{n} \int_{-\infty}^{\infty} \big(I(x-X_i\geq 0)-\widehat{F}_{h_F,-i}(x) \big)^2 dx, \end{equation*}

where $ \widehat{F}_{h_F,-i}(x) $ denotes the kernel estimator constructed from the data with observation $X_i$ omitted. Bowman et al. [Reference Bowman, Hall and Prvan3] performed a simulation study comparing the CV method with the PI method. Their study showed that the PI method did not behave well in simulations and that better results were obtained, in general, with the CV method. A drawback of the CV method is its weak performance in terms of computational time in simulation studies, because it involves the minimization of a function of $n^2$ terms that must be evaluated over a large enough grid of bandwidths. Obviously, this is not really a drawback for a real data situation, because the minimization process is carried out only once.
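A rough implementation of this criterion is sketched below for a Gaussian kernel, with the integral approximated on a finite grid; the grid limits, candidate bandwidths, and test sample are illustrative assumptions rather than the settings used in our study.

```python
# Rough sketch of the Bowman et al. cross-validation criterion for the kernel
# CDF bandwidth h_F (Gaussian kernel), with the integral approximated on a grid.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import norm

def cdf_cv_score(h, data, grid):
    n = len(data)
    W = norm.cdf((grid[:, None] - data[None, :]) / h)   # shape (len(grid), n)
    total = 0.0
    for i in range(n):
        F_loo = (W.sum(axis=1) - W[:, i]) / (n - 1)     # leave-one-out estimate
        indicator = (grid >= data[i]).astype(float)
        total += trapezoid((indicator - F_loo) ** 2, grid)
    return total / n

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=60)
grid = np.linspace(x.min() - 3.0, x.max() + 3.0, 400)
bandwidths = np.linspace(0.05, 1.5, 40)
h_F_cv = bandwidths[np.argmin([cdf_cv_score(h, x, grid) for h in bandwidths])]
print(h_F_cv)
```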

5. Simulation study

In this section, we evaluate the accuracy of the nonparametric estimator of $ J(X_{i:n}, X; t) $ in Equation (21) using a simulation study. We use a Monte Carlo simulation study to compare the proposed estimators in terms of the absolute bias (AB) and root mean square error (RMSE). For the estimation of $ J(X_{i:n}, X; t) $, we generate random samples from the exponential distribution in Example 2.5 with parameter $ \lambda=0.1, 0.2, 0.5 $, the beta distribution in Example 2.6 with parameter $ a=2, 3, 5 $, and the uniform distribution in Example 2.7 with parameter $ b=5, 10, 20 $. Also, we consider different times (t), orders ($ k=1,5,10 $), and sample sizes ($ n=50, 200 $) for each of these distributions. The kernel estimates of the PDF with bandwidth selected by the NR method and the CV method are denoted by $ \widehat{f}_h^{NR} $ and $ \widehat{f}_h^{CV} $, respectively. Also, the kernel estimates of the CDF with bandwidth selected by the PI method and the CV method are denoted by $ \widehat{F}_h^{PI} $ and $ \widehat{F}_h^{CV} $, respectively. The estimated values of AB and RMSE are reported in Tables 1–6.
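For concreteness, the following condensed sketch reproduces one cell of such a study for exponential data with $i = 1$, using the NR bandwidth for both the kernel PDF and the kernel CDF (a simplification of the four method combinations listed below); the replication count and parameter values are illustrative, so the resulting AB and RMSE values are only indicative.

```python
# Condensed sketch of one simulation cell: AB and RMSE of the estimator of
# J(X_{1:n}, X; t) for exponential data with i = 1 and NR bandwidths.
import numpy as np
from scipy import integrate
from scipy.stats import norm

lam, n, t, reps = 0.2, 50, 1.0, 100
true_J = -n * lam / (2 * (n + 1))                  # Example 2.5: constant in t
upper = t - np.log(1e-8) / (lam * (n + 1))         # integrand negligible beyond here
rng = np.random.default_rng(3)
est = []
for _ in range(reps):
    x = rng.exponential(scale=1 / lam, size=n)
    h = 1.06 * min(x.std(ddof=1), (np.quantile(x, .75) - np.quantile(x, .25)) / 1.34) * n**-0.2
    fhat = lambda u: norm.pdf((u - x) / h).mean() / h          # kernel PDF
    Fbar = lambda u: 1 - norm.cdf((u - x) / h).mean()          # kernel sf
    num = integrate.quad(lambda u: Fbar(u)**(n - 1) * fhat(u)**2, t, upper)[0]
    est.append(-n * num / (2 * Fbar(t)**(n + 1)))
est = np.array(est)
print("AB =", np.abs(est - true_J).mean(), "RMSE =", np.sqrt(((est - true_J)**2).mean()))
```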

Table 1. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for Exponential distribution with mean $ 1/\lambda $ on sample size n = 50

Table 2. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for Exponential distribution with mean $ 1/\lambda $ on sample size n = 200

Table 3. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for Beta distribution in Example 2.6 on sample size n = 50

Table 4. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for beta distribution in Example 2.6 on sample size n = 200

Table 5. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for Uniform distribution in Example 2.7 on sample size n = 50

Table 6. Estimation of AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ for Uniform distribution in Example 2.7 on sample size n = 200

We consider four methods to estimate $ J(X_{i:n}, X; t) $ based on the type of bandwidth selection as follows:

1. bandwidth selection for the estimation of the PDF with the NR method and the CDF with the PI method, denoted by $(\,\widehat{f}_h^{NR}, \widehat{F}_h^{PI})$;

2. bandwidth selection for the estimation of the PDF with the CV method and the CDF with the PI method, denoted by $(\,\widehat{f}_h^{CV}, \widehat{F}_h^{PI})$;

3. bandwidth selection for the estimation of the PDF with the NR method and the CDF with the CV method, denoted by $(\,\widehat{f}_h^{NR}, \widehat{F}_h^{CV})$;

4. bandwidth selection for the estimation of the PDF and the CDF with the CV method, denoted by $(\,\widehat{f}_h^{CV}, \widehat{F}_h^{CV})$.

The simulation results in Tables 1–6 show that the estimator of $ J(X_{i:n}, X; t) $ with bandwidth selection by the CV method for both the kernel PDF and CDF estimates, that is, $(\widehat{f}_h^{CV}, \widehat{F}_h^{CV})$, has the best performance. In general, the kernel estimation of the PDF in $ J(X_{i:n}, X; t) $ using the CV method is more accurate than with the NR method. The estimated AB and RMSE of the proposed estimators decrease as the sample size increases. In general, the estimated AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ decrease with increasing time (t) or order (k). Also, the estimated AB and RMSE of $ \widehat{J}(X_{i:n}, X; t) $ increase with the considered parameter values for the three distributions.

The CV method for bandwidth selection is data-driven rather than dependent on the assumption of normality. Since the lifetime distributions considered in the simulation study are not normal, the CV method for bandwidth selection was expected to perform well in estimating the PDF and CDF. This is confirmed by the comparison of the AB and RMSE of the proposed estimators of $ \widehat{J}(X_{i:n}, X; t) $ in Tables 1–6.

6. Real data

In this section, we consider a real data set to show the behavior of the estimators in real cases and illustrate the application of the suggested measure for a model selection. We consider the following data set from [Reference Chowdhury, Mukherjee and Nanda4] on the number of casualties in n = 44 different plane crashes:

3, 77, 9, 6, 14, 6, 23, 32, 18, 7, 27, 22, 10, 47, 9, 85, 7, 16, 80, 2, 8, 38, 11, 12, 4, 21, 8, 44, 30, 3, 2, 19, 18, 2, 28, 8, 1, 5, 8, 1, 3, 5, 4, 3.

In Table 7, the values of the log-likelihood, Akaike information criterion (AIC), and Bayesian information criterion (BIC), as well as the Kolmogorov–Smirnov (K-S) goodness-of-fit test, are presented for choosing the best model among the exponential, Weibull, log-normal, gamma, and log-logistic distributions. The results of this table show that the log-normal distribution is closest to the real data distribution. The maximum likelihood estimates of the location and scale parameters of the log-normal distribution are 2.30 and 1.12, respectively.

Table 7. Model selection criteria for the number of casualties data

Figure 4. Estimation of $ J(X_{1:n},X ; t) $ for the number of casualties data

In Figure 4, the nonparametric estimation of $ J(X_{1:n},X ; t) $ using Equation (21) is plotted for different values of time (t). For this estimation, we use the CV method for bandwidth selection in the kernel estimations of PDF and CDF. Also, in this figure, the theoretical values of $ J(X_{1:n},X ; t) $ are drawn based on exponential, Weibull, log-normal, gamma, and log-logistic distributions. It can be seen that the nonparametric estimation of the $ J(X_{1:n},X ; t) $ is close to its theoretical value based on the log-normal distribution. Therefore, the log-normal distribution is a better choice than other distributions, which is consistent with the results of Table 7 based on AIC and BIC criteria.
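The model-selection step behind Table 7 can be reproduced in outline with standard maximum-likelihood fits. The sketch below computes the log-likelihood, AIC, and BIC for the five candidate families using scipy.stats (the log-logistic corresponds to the fisk distribution, and the location parameter is fixed at zero); these conventions are assumptions on our part, so the numbers may differ slightly from those in Table 7.

```python
# Sketch of the model-selection step: ML fits and AIC/BIC for the candidate
# families on the casualty counts. Location fixed at zero (an assumption).
import numpy as np
from scipy import stats

data = np.array([3, 77, 9, 6, 14, 6, 23, 32, 18, 7, 27, 22, 10, 47, 9, 85, 7, 16,
                 80, 2, 8, 38, 11, 12, 4, 21, 8, 44, 30, 3, 2, 19, 18, 2, 28, 8,
                 1, 5, 8, 1, 3, 5, 4, 3], dtype=float)

candidates = {"exponential": stats.expon, "Weibull": stats.weibull_min,
              "log-normal": stats.lognorm, "gamma": stats.gamma,
              "log-logistic": stats.fisk}
for name, dist in candidates.items():
    params = dist.fit(data, floc=0)                  # location fixed at zero
    loglik = dist.logpdf(data, *params).sum()
    k = len(params) - 1                              # number of free parameters
    aic = 2 * k - 2 * loglik
    bic = k * np.log(len(data)) - 2 * loglik
    print(f"{name:12s}  logL = {loglik:8.2f}  AIC = {aic:8.2f}  BIC = {bic:8.2f}")
```

For the log-normal fit, the shape parameter returned by scipy corresponds to the scale parameter of the log-normal distribution and the logarithm of its scale argument corresponds to the location parameter, which can be compared with the estimates reported above.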

7. Conclusion

This paper introduced a new approach to measuring the residual inaccuracy of OS. Additionally, we investigated several lifetime distributions by analyzing the residual inaccuracy of the $X_{1:n}$ statistic. Furthermore, we explored various properties associated with this new measure. Our study also examined the dynamic measure of inaccuracy for the first and the ith OS, showing that it determines the distribution function uniquely. A nonparametric kernel estimator of $ J(X_{1:n},X ; t) $ was provided. The NR and CV methods were considered for selecting the bandwidth in kernel PDF estimation, and the PI and CV methods for selecting the bandwidth in kernel CDF estimation. The simulation results showed that the estimator of $ J(X_{i:n}, X; t) $ with bandwidth selection by the CV method for both the kernel PDF and CDF estimates has the best performance. Finally, an application was given to demonstrate how the suggested measure can be applied in model selection.

Acknowledgements

The authors thank the editor-in-chief, the associate editor, and the anonymous reviewers for their useful comments on the earlier version of this paper.

Funding statement

This research received no external funding.

References

Altman, N. & Leger, C. (1995). Bandwidth selection for kernel distribution function estimation. Journal of Statistical Planning and Inference 46(2): 195–214.
Arnold, B.C., Balakrishnan, N. & Nagaraja, H.N. (2008). A First Course in Order Statistics, Classic edn. Philadelphia: SIAM.
Bowman, A., Hall, P. & Prvan, T. (1998). Bandwidth selection for the smoothing of distribution functions. Biometrika 85(4): 799–808.
Chowdhury, S., Mukherjee, A. & Nanda, A.K. (2017). On compounded geometric distributions and their applications. Communications in Statistics - Simulation and Computation 46(3): 1715–1734.
David, H.A. & Nagaraja, H.N. (2003). Order Statistics, 3rd edn. New York: John Wiley & Sons.
Ebrahimi, N. (1996). How to measure uncertainty in the residual life distributions. Sankhya: The Indian Journal of Statistics 58(1): 48–57.
Gupta, R.C. & Kirmani, S.N.U.A. (2008). Characterizations based on conditional mean function. Journal of Statistical Planning and Inference 138(4): 964–970.
Hashempour, M. & Mohammadi, M. (2022). On dynamic cumulative past inaccuracy measure based on extropy. Communications in Statistics - Theory and Methods: 1–18. DOI: 10.1080/03610926.2022.2098335
Hashempour, M. & Mohammadi, M. (2023). Extropy-based inaccuracy measure in order statistics. Statistics: 1–19. DOI: 10.1080/02331888.2023.2273505
Hashempour, M. & Mohammadi, M. (2023). A new measure of inaccuracy for record statistics based on extropy. Probability in the Engineering and Informational Sciences: 1–19. DOI: 10.1017/S0269964823000086
Kayal, S. (2017). Quantile-based cumulative inaccuracy measures. Physica A: Statistical Mechanics and its Applications 510: 329–344.
Kayal, S. & Sunoj, S.M. (2005a). Generalized Kerridge’s inaccuracy measure for conditionally specified models. Communications in Statistics - Theory and Methods 46(16): 8257–8268.
Kayal, S., Sunoj, S.M. & Rajesh, G. (2017). On dynamic generalized measures of inaccuracy. Statistica 77(2): 133–148.
Kerridge, D.F. (1961). Inaccuracy and inference. Journal of the Royal Statistical Society, Series B 23(1): 184–194.
Kullback, S. (1959). Information Theory and Statistics. New York: Wiley.
Kullback, S. & Leibler, R.A. (1951). On information and sufficiency. The Annals of Mathematical Statistics 22(1): 79–86.
Lad, F., Sanfilippo, G. & Agró, G. (2015). Extropy: complementary dual of entropy. Statistical Science 30(1): 40–58.
Lejeune, M. & Sarda, P. (1992). Smooth estimators of distribution and density functions. Computational Statistics and Data Analysis 14(4): 457–471.
Lindley, D.V. (1957). Binomial sampling and the concept of information. Biometrika 44(1): 179–186.
Mohammadi, M. & Hashempour, M. (2022). On interval weighted cumulative residual and past extropies. Statistics 56(5): 1029–1047.
Nadaraya, E.A. (1964). On estimating regression. Theory of Probability and its Applications 9(1): 141–142.
Pakdaman, Z. & Hashempour, M. (2019). Mixture representations of the extropy of conditional mixed systems and their information properties. Iranian Journal of Science and Technology, Transactions A: Science 45(3): 1057–1064.
Polansky, A.M. & Baker, E.R. (2000). Multistage plug-in bandwidth selection for kernel distribution function estimates. Journal of Statistical Computation and Simulation 65(1): 63–80.
Qiu, G. (2017). The extropy of order statistics and record values. Statistics and Probability Letters 120: 52–60.
Qiu, G. & Jia, K. (2018a). The residual extropy of order statistics. Statistics and Probability Letters 133: 15–22.
Qiu, G. & Jia, K. (2018b). Extropy estimators with applications in testing uniformity. Journal of Nonparametric Statistics 30(1): 182–196.
Qiu, G., Wang, L. & Wang, X. (2019). On extropy properties of mixed systems. Probability in the Engineering and Informational Sciences 33(3): 471–486.
Sarda, P. (1993). Smoothing parameter selection for smooth distribution functions. Journal of Statistical Planning and Inference 35(1): 65–75.
Shaked, M. & Shanthikumar, J.G. (2007). Stochastic Orders. New York: Springer.
Shannon, C. & Weaver, W. (1949). The Mathematical Theory of Communication. Champaign: University of Illinois Press.
Shannon, C.E. (1948). A mathematical theory of communication. Bell System Technical Journal 27(3): 379–423.
Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. New York: Springer.
Taneja, H.C., Kumar, V. & Srivastava, R. (2009). A dynamic measure of inaccuracy between two residual lifetime distributions. International Mathematical Forum 4(25): 1213–1220.