
A new measure of inaccuracy for record statistics based on extropy

Published online by Cambridge University Press:  10 March 2023

Majid Hashempour
Affiliation:
Department of Statistics, University of Hormozgan, P. O. Box 3995, Bandar Abbas, Hormozgan 7916193145, Iran
Morteza Mohammadi*
Affiliation:
Department of Statistics, University of Zabol, P. O. Box 98615-538, Zabol, Sistan and Baluchestan, Iran
*Corresponding author. E-mail: mo.mohammadi@uoz.ac.ir

Abstract

We introduce a new measure of inaccuracy, based on extropy, between the distribution of the nth upper (lower) record value and the parent random variable, and discuss some of its properties. A characterization problem for the proposed extropy-inaccuracy measure is studied. It is also shown that the defined measure of inaccuracy is invariant under location but not under scale transformations. We characterize certain specific lifetime distribution functions. Nonparametric estimators based on the empirical and kernel methods for the proposed measure are also obtained. The performance of the estimators is discussed using a real dataset.

Type
Research Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press.

1. Introduction

Shannon [Reference Shannon29] introduced the concept of entropy, which is widely used in the fields of probability, statistics, information theory, communication theory, economics, physics, and so forth. In information theory, entropy is a measure of the uncertainty associated with a random variable. For more details, see [Reference Cover and Thomas6].

Suppose that X and Y are two nonnegative random variables representing times to failure of two systems with probability density functions (p.d.f.s) f(x) and g(x), respectively. Suppose that F(x) and G(x) are the failure distributions or cumulative distribution functions (c.d.f.s) and that $\bar{F}(x) $ and $\bar{G}(x) $ are the survival functions (SFs) of X and Y, respectively. Shannon’s measure of entropy in [Reference Shannon29] and Kerridge’s measure of inaccuracy in [Reference Kerridge12] associated with the random variable X are given by

(1) \begin{eqnarray} {H}(\,f) = -E(\log f(X)) =-\int ^{+\infty}_{-\infty} f(x)\, \log f(x)\, {\rm d}x \end{eqnarray}

and

(2) \begin{eqnarray} {H}(\,f, g) =-\int ^{+\infty}_{-\infty} f(x)\, \log g(x)\, {\rm d}x, \end{eqnarray}

respectively, where “log” denotes the natural logarithm. When $g(x) = f(x)$, Eq. (2) reduces to Eq. (1).

The measures of information and inaccuracy are related by $H(\,f, g) =H(\,f ) + H(\,f|g)$, where $H(\,f|g)$ denotes the relative information measure of X about Y (see [Reference Kullback13]), defined as

\begin{eqnarray*} {H}(\,f|g) =\int^{\infty}_0 f(x)\, \log \frac{f(x)}{g(x)}\,{\rm d}x. \end{eqnarray*}

1.1. Extropy

Recently, Lad et al. [Reference Lad, Sanfilippo and Agro15] proposed an alternative measure of uncertainty of a random variable called extropy. The term extropy, as an antonym of entropy, had appeared earlier in the academic literature: Max More described it as “the extent of a living or organizational system’s intelligence, functional order, vitality, energy, life, experience, and capacity and drive for improvement and growth”. In that usage, extropy is not a rigorous technical term of the philosophy of science; in a metaphorical sense, it is simply the opposite of entropy. The extropy of the random variable X is defined by [Reference Lad, Sanfilippo and Agro15] as follows:

\begin{eqnarray*} J(X) =-\frac{1}{2}\int ^{\infty}_{0} f^2(x)\, {\rm d}x. \end{eqnarray*}

The properties of this measure such as the maximum extropy distribution and statistical applications were presented by [Reference Lad, Sanfilippo and Agro15].
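As a quick numerical illustration (ours, not from [Reference Lad, Sanfilippo and Agro15]), the extropy of an exponential density with rate λ is $-\lambda/4$, since $\int_0^\infty \lambda^2\,{\rm e}^{-2\lambda x}\,{\rm d}x=\lambda/2$; the following minimal Python sketch confirms this by quadrature:

```python
# Minimal sketch (ours): numerically verify J(X) = -lambda/4 for an
# exponential density f(x) = lam * exp(-lam * x).
import numpy as np
from scipy.integrate import quad

lam = 2.0
f = lambda x: lam * np.exp(-lam * x)

J_numeric, _ = quad(lambda x: -0.5 * f(x) ** 2, 0, np.inf)
print(J_numeric, -lam / 4)  # both approximately -0.5
```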

The characterization results and symmetric properties of the extropy of order statistics and record values were derived in [Reference Qiu22]. Two estimators for the extropy of an absolutely continuous random variable were introduced by [Reference Qiu and Jia21]. Qiu et al. [Reference Qiu, Wang and Wang23] explored an expression for the extropy of a mixed system’s lifetime. Yang et al. [Reference Yang, Xia and Hu32] studied the relations between extropy and variational distance and determined the distributions that attain the minimum or maximum extropy among all distributions within a given variational distance of any given probability distribution. For more details, see [Reference Pakdaman and Haashempour19, Reference Pakdaman and Hashempour20] and references therein. Jahanshahi et al. [Reference Jahanshahi, Zarei and Khammar11] introduced an alternative measure of uncertainty of a nonnegative random variable X, which they called the cumulative residual extropy (CRJ), given as

(3) \begin{eqnarray} \xi J(F) =-\frac{1}{2}\int ^{\infty}_{0} \bar{F}^2(x)\, {\rm d}x. \end{eqnarray}

They studied some properties of this information measure. The measure in Eq. (3) is not applicable to a system that has already survived for some units of time. Hence, Sathar and Nair [Reference Sathar and Nair27] proposed a dynamic version of CRJ (called dynamic survival extropy) to measure the residual uncertainty of a lifetime random variable X as follows:

(4) \begin{equation} \xi J(X; t) =-\frac{1}{2\bar{F}^2(t)}\int_{t}^{\infty} \bar{F}^2 (x)\,{\rm d}x, \quad t\geq 0. \end{equation}

Analogous to CRJ, Nair and Sathar [Reference Nair and Sathar17] introduced a cumulative extropy useful in measuring the inactivity of a system. It is a dual concept of the CRJ and is suitable for measuring the uncertainty of the past lifetimes of the system. It is defined as

\begin{eqnarray*} \bar{\xi} J(X) =-\frac{1}{2}\int ^{\infty}_{0} F^2(x)\, {\rm d}x. \end{eqnarray*}

In analogy with Eq. (4), [Reference Nair and Sathar17] proposed a dynamic cumulative extropy for the past lifetime, defined by

\begin{equation*} \bar{\xi} J(X; t) =-\frac{1}{2 F^2(t)}\int_{0}^t F^2 (x)\,{\rm d}x, \quad t\geq 0. \end{equation*}

Recently, [Reference Mohammadi and Hashempour16] proposed interval weighted cumulative residual extropy (WCRJ) and weighted cumulative past extropy (WCPJ), respectively, as follows:

\begin{equation*} {\rm WCRJ}(X; t_1, t_2) =-\frac{1}{2}\int^{t_2}_{t_1} \varphi(x) \left( \frac{\bar{F} (x)}{\bar{F} (t_1)-\bar{F} (t_2)} \right)^2 \, {\rm d}x \end{equation*}

and

\begin{equation*} {\rm WCPJ}(X; t_1, t_2) =-\frac{1}{2}\int^{t_2}_{t_1} \varphi(x) \left( \frac{F (x)}{F (t_2)-F (t_1)} \right)^2 \, {\rm d}x. \end{equation*}

They proposed some properties and nonparametric estimators of these measures.

1.2. Record values

Record values can be viewed as order statistics from a sample whose size is determined by the values and the order of occurrence of the observations. Suppose that $X_1, X_2, \ldots $ is a sequence of independent and identically distributed (i.i.d.) random variables with continuous c.d.f. F(x) and p.d.f. f(x). The order statistics of a sample are defined by arranging $X_1, \ldots, X_n$ from the smallest to the largest, denoted as $X_{1:n}, \ldots, X_{n:n}$. An observation $X_k$ is called an upper record value if it exceeds all previous observations; thus, $X_k$ is an upper record if $X_k \gt X_i$ for every $i \lt k$. An analogous definition can be given for a lower record value. Suppose that T(k) is the time at which the kth record value is observed. Since the first observation is always a record value, we have $T(1) = 1$ and $T( k +1) =\min \{i : X_i \gt X_{T( k)} \}$, where T(0) is defined to be 0. The sequence of upper record values can thus be defined by $ U_k= X_{T( k)},\ k= 1, 2, \ldots$. Let $S( k) = T( k+ 1)-T( k)$ denote the inter-record time between the kth and the $(k+1)$th record values, and, for simplicity, denote the kth record value $X_{T(k)}$ by $X_k$. It can easily be shown that the p.d.f.s of the kth upper record value and the kth lower record value are given by

\begin{eqnarray*} f^u_k(x)=\frac{\left[ -\log(\bar{F}(x)) \right]^{k-1}f(x)}{(k-1)!} \end{eqnarray*}

and

\begin{eqnarray*} f^l_k(x)=\frac{\left[ -\log(F(x)) \right]^{k-1}f(x) }{(k-1)!}, \end{eqnarray*}

where $(k-1)!=\Gamma(k)$ denotes the gamma function defined as $\Gamma(k)=\int^\infty_0 x^{k-1}\,{\rm e}^{-x}\,{\rm d}x.$
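As a small illustration (our sketch, not part of the original text), for i.i.d. standard exponential observations we have $-\log(\bar{F}(x))=x$, so $f^u_k$ is exactly the Gamma(k, 1) density; the following snippet simulates upper records and checks this:

```python
# Simulation sketch (ours): for i.i.d. Exp(1) data, the kth upper record
# has p.d.f. x^(k-1) e^(-x) / (k-1)!, i.e., it is Gamma(k, 1) distributed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
k, n_rep, seq_len = 3, 2000, 10000
records = []
for _ in range(n_rep):
    x = rng.exponential(size=seq_len)
    running_max = np.maximum.accumulate(x)
    # a new upper record occurs where x exceeds all previous observations
    rec_idx = np.nonzero(np.r_[True, x[1:] > running_max[:-1]])[0]
    if len(rec_idx) >= k:
        records.append(x[rec_idx[k - 1]])

print(stats.kstest(records, stats.gamma(a=k).cdf))  # large p-value expected
```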

We refer the reader to [Reference Arnold, Balakrishnan and Nagaraja2] and [Reference Ahsanullah1] and references therein for more details on the theory and applications of record values. The information measures for record values have been investigated by several authors, including [Reference Baratpour, Ahmadi and Arghami4, Reference Ebrahimi, Soofi and Zahedi7, Reference Kumar14, Reference Raqab and Awad24].

A natural question is how to determine the amount of inaccuracy contained in a set of record values from a sequence of i.i.d. random variables. Motivated by this question, we suggest a measure of inaccuracy, based on extropy, between the distribution of the nth record value and the parent random variable. This paper is organized as follows: In Section 2, we determine the amount of inaccuracy contained in a set of record values from a sequence of i.i.d. random variables and study a characterization result based on this measure. In Sections 3 and 4, the measure of inaccuracy is studied for some distributions. In Section 5, nonparametric estimators for the proposed measure are obtained. In Section 6, we use a Monte Carlo simulation study to evaluate these estimators. Finally, Section 7 investigates the behavior of the proposed estimators on a real dataset.

2. A new measure of inaccuracy for record statistics

In this section, we introduce and study a new extropy-based measure of inaccuracy between the distribution of the nth record value and the parent random variable.

Definition 2.1. Let X and Y be nonnegative continuous random variables with p.d.f.s f(x) and g(x) and c.d.f.s F(x) and G(x), respectively. Then the “extropy-inaccuracy” measure between the distributions of X and Y is defined as

(5) \begin{eqnarray} J(\,f, g) =-\frac{1}{2}\int ^{\infty}_{0} f(x) g(x)\, {\rm d}x. \end{eqnarray}

In what follows, let f(x) and g(x) be density functions of the lifetime random variables X and Y, respectively. It is assumed that f(x) is the actual density function corresponding to the observation and g(x) is the density function assigned by the experimenter. Also, the discrimination information based on extropy and inaccuracy between density functions f(x) and g(x) can be defined by

(6) \begin{eqnarray} J(\,f | g) =\frac{1}{2}\int ^{\infty}_{0} \left[f(x)-g(x) \right] f(x)\, {\rm d}x; \end{eqnarray}

for more details, see [Reference Raqab and Qiu25].

The extropy-inaccuracy measure can be viewed as a generalization of the idea of extropy. This measure is a useful tool for the measurement of errors in experimental results. In fact, the extropy-inaccuracy measure can be expressed as the sum of an uncertainty measure and a discrimination measure between two distributions. When an experimenter states the probabilities of various events in an experiment, the statement can lack precision in two ways: one resulting from incorrect information (e.g., mis-specifying the model) and the other from vagueness in the statement (e.g., missing observations or insufficient data). All estimation and inference problems are concerned with making statements that may be inaccurate in either or both of these ways. The extropy-inaccuracy measure can account for both types of error.

This measure has applications in statistical inference and estimation. Moreover, some concepts in reliability studies for modeling lifetime data, such as the failure rate and the mean residual life function, can be described using the extropy-inaccuracy measure. In lifetime studies, the data are generally truncated; hence, there is scope for extending information-theoretic concepts to ordered situations and record values. Motivated by this, we extend the definition of inaccuracy to the extropy-inaccuracy measure. Further, we also consider the problem of characterizing probability distributions using the functional form of these measures. The identification of an appropriate probability distribution for lifetimes is one of the basic problems encountered in reliability theory. Although several methods, such as probability plots and goodness-of-fit procedures, are available in the literature to find an appropriate model for the observations, they fail to provide an exact model; utilizing an extropy-inaccuracy measure is one way to attain this goal.

According to [Reference Nath18], we provide the following basic properties of extropy-inaccuracy measure in Eq. (5):

  1. In the case $f(x) = g(x)$, $ J(\,f, g) $ reduces to extropy and achieves its minimum value.

  2. The extropy-inaccuracy measure is symmetric, that is, $ J(\,f, g)= J(g,f)$.

  3. Finiteness of the extropy-inaccuracy measure: A necessary condition for $ J(\,f, g) $ to be finite is that F and G should be absolutely continuous with respect to each other, that is, $ F\equiv G $. It is obvious that $ J(\,f, g)\leq 0 $.

  4. $J(\,f|h) -J(\,f|g) =J(\,f, h) -J(\,f, g)$.

  5. Convergence of the extropy-inaccuracy measure: Let $\{g_n(x)\}_{n=1}^{\infty}$ be a sequence of p.d.f.s such that $ \lim_{n\rightarrow \infty} g_n(x)= f(x) $ almost everywhere. If $ \vert g_n(x) \vert \leq M $ for all $ n \in N $, where N is the set of positive integers, then

    $$ \vert g_n(x) f(x) \vert \leq M N^{\prime}, $$
    where $ f(x)\leq N^{\prime} $ and $ \lim_{n\rightarrow \infty} g_n(x)\,f(x)= f^2(x) $ almost everywhere, so that by the Lebesgue dominated convergence theorem,
    $$ \lim_{n\rightarrow \infty} J(\,f,g_n)= J(\,f). $$

The inaccuracy and extropy-inaccuracy measures are not competing but are rather complementary. However, the properties of symmetry, finiteness, and simplicity of calculation can be considered advantages of the extropy-inaccuracy measure over Kerridge’s inaccuracy measure. The most important advantage of extropy is that it is easy to compute, and it is therefore of great interest to explore its potential applications in developing goodness-of-fit tests and inferential methods.

In Example 2.2, the behaviors of the extropy-inaccuracy measure and Kerridge’s inaccuracy measure are compared for the exponential distribution.

Example 2.2. We consider exponential distributions with mean values of 0.5 for f(x) and $\lambda^{-1}$ for g(x). Kerridge’s inaccuracy measure $H(\,f,g)$ and the extropy-inaccuracy measure $J(\,f,g)$ can be obtained as

\begin{align*} H(\,f,g)&=\frac{\lambda}{2}-\log\left(\lambda\right),\\ J(\,f,g)&=-\frac{\lambda}{\lambda+2}. \end{align*}

Plots of these two measures are shown in Figure 1. It is observed that Kerridge’s inaccuracy measure $H(\,f,g)$ does not have a monotone behavior with respect to the parameter λ, while our proposed inaccuracy measure $J(\,f,g)$ does: as λ increases, the value of $J(\,f,g)$ decreases monotonically, because the distance between the density functions f(x) and g(x) increases. Also, as λ tends to zero, $H(\,f,g)$ tends to infinity, while $J(\,f,g)$ remains finite and nonpositive.

Figure 1. Plots of Kerridge’s inaccuracy measure $H(\,f,g)$ and the extropy-inaccuracy measure $J(\,f,g)$ for the exponential distribution with mean values of 0.5 for f(x) and $\lambda^{-1}$ for g(x).
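The closed forms in Example 2.2 are easy to check by direct numerical integration; the sketch below (ours, not from the paper) does so for a fixed λ:

```python
# Numerical check of Example 2.2 (our sketch): f is exponential with mean
# 0.5 (rate 2) and g is exponential with rate lam, so that
#   H(f, g) = lam/2 - log(lam)  and  J(f, g) = -lam / (lam + 2).
import numpy as np
from scipy.integrate import quad

lam = 3.0
f = lambda x: 2.0 * np.exp(-2.0 * x)
g = lambda x: lam * np.exp(-lam * x)

H, _ = quad(lambda x: -f(x) * np.log(g(x)), 0, np.inf)
J, _ = quad(lambda x: -0.5 * f(x) * g(x), 0, np.inf)
print(H, lam / 2 - np.log(lam))  # ~0.401 for lam = 3
print(J, -lam / (lam + 2))       # -0.6 for lam = 3
```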

In the following result, we obtain the measure of inaccuracy between the distribution of the nth upper record value and the parent random variable.

Proposition 2.3. Let X be a nonnegative continuous random variable with p.d.f. f(x) and c.d.f. F(x). Then

\begin{eqnarray*} J(\,f^u_n, f)= J(\,f^u_n) + J(\,f^u_n |\, f). \end{eqnarray*}

Proof. We have

(7) \begin{eqnarray} J(\,f^u_n , f) &=&-\frac{1}{2}\int ^{\infty}_{0} f^u_n (x) f(x)\, {\rm d}x\\ &=& -\frac{1}{2}\int ^{\infty}_{0} \left[ f^u_n (x) f(x) +{f ^u_n}^2(x)-{f^u_n}^2(x) \right] \, {\rm d}x \nonumber \\ &=& -\frac{1}{2}\int ^{\infty}_{0} \left[ f^u_n (x) f(x) -{f ^u_n}^2(x)\right]\,{\rm d}x -\frac{1}{2}\int ^{\infty}_{0}{f^u_n}^2(x)\, {\rm d}x \nonumber \\ &=& -\frac{1}{2}\int ^{\infty}_{0} f^u_n (x) \left[ f(x) -f ^u_n(x)\right]\,{\rm d}x -\frac{1}{2}\int ^{\infty}_{0}{f^u_n}^2(x)\, {\rm d}x \nonumber \\ &=& J(\,f^u_n) + J(\,f^u_n |\, f), \nonumber \end{eqnarray}

where $J(\,f^u_n)$ and $J(\,f^u_n |\, f )$ denote the measure of extropy and the measure of relative information based on extropy, respectively. Hence, the measures of information and inaccuracy are related by $J(\,f^u_n)= J(\,f^u_n, f)-J(\,f^u_n |\, f)$.
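The decomposition in Proposition 2.3 can be verified numerically; the sketch below (ours) uses upper records from an exponential distribution, for which $f^u_n$ has a simple closed form and $J(\,f^u_n, f)=-\lambda/2^{n+1}$ (see Section 3):

```python
# Numerical check of Proposition 2.3 (our sketch) for exponential upper
# records: J(f^u_n, f) = J(f^u_n) + J(f^u_n | f), with -log(F_bar(x)) = lam*x.
import numpy as np
from scipy.integrate import quad
from math import factorial

lam, n = 1.5, 3
f = lambda x: lam * np.exp(-lam * x)
fn = lambda x: (lam * x) ** (n - 1) * f(x) / factorial(n - 1)  # nth upper record p.d.f.

J_fn_f, _ = quad(lambda x: -0.5 * fn(x) * f(x), 0, np.inf)          # J(f^u_n, f)
J_fn, _ = quad(lambda x: -0.5 * fn(x) ** 2, 0, np.inf)              # J(f^u_n)
J_rel, _ = quad(lambda x: 0.5 * (fn(x) - f(x)) * fn(x), 0, np.inf)  # J(f^u_n | f)

print(J_fn_f, J_fn + J_rel)         # the two sides of the decomposition agree
print(J_fn_f, -lam / 2 ** (n + 1))  # closed form from Section 3
```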

In the rest of this section, we show that the inaccuracy measure defined above uniquely characterizes the distribution function of the parent random variable X. To prove this, we use the following lemma.

Lemma 2.4. A complete orthogonal system for the space $L_2$ is given by the sequence of Laguerre functions

\begin{equation*} \psi_n(x)=\frac{1}{n!}L_n(x)\, {\rm e}^{-x/2}, \end{equation*}

where $L_n(x)$ is the Laguerre polynomial defined as the sum of coefficients of ${\rm e}^{-x}$ in the nth derivative of $x^n \,{\rm e}^{-x}$, that is,

\begin{equation*} L_n(x)={\rm e}^{x} \frac{{\rm d}^n}{{\rm d}x^n} \left[x^n {\rm e}^{-x}\right]=\sum_{s=0}^n (-1)^s x^s \displaystyle{n\choose s}n(n-1)\cdots (s+1). \end{equation*}

The completeness of the Laguerre functions in $L_2$ means that if $f \in L_2$ and $\int^\infty_0 f(x)L_n(x)\, {\rm e}^{-x/2}\,{\rm d}x=0$ for all $n \geq 0$, then f is zero almost everywhere.

Theorem 2.5. Suppose that $X_1$ and $X_2$ are two nonnegative random variables with p.d.f.s $f_1(x)$ and $f_2(x)$ and continuous c.d.f.s $F_1(x)$ and $F_2(x)$, respectively. Then $F_1$ and $F_2$ belong to the same family of distributions, but for a change in location, if and only if

\begin{equation*} J(\,f^u_{n,1}(x), f_1(x))= J(\,f^u_{n,2}(x), f_2(x)), \end{equation*}

where $f^u_{n,1}$ and $f^u_{n,2}$ are the p.d.f.s of the nth upper record value for the parent distributions $f_1$ and $f_2$, respectively.

Proof. We prove the sufficiency part. For all $n \geq 1$, suppose $ J(\,f^u_{n,1}(x), f_1(x))= J(\,f^u_{n,2}(x), f_2(x))$. Then

\begin{equation*} -\frac{1}{2}\int ^{\infty}_{0} f^u_{n,1} (x) f_1(x)\, {\rm d}x=-\frac{1}{2}\int ^{\infty}_{0} f^u_{n,2} (x) f_2(x)\, {\rm d}x. \end{equation*}

Moreover,

\begin{eqnarray*} -\frac{1}{2}\int ^{\infty}_{0} \frac{\left[ -\log(\bar{F_1}(x)) \right]^{n-1} f^2_1(x) }{(n-1)!}\,{\rm d}x= -\frac{1}{2} \int ^{\infty}_{0} \frac{\left[ -\log(\bar{F_2}(x)) \right]^{n-1} f^2_2(x)}{(n-1)!}\,{\rm d}x. \end{eqnarray*}

Using the substitution $u=-\log(\bar{F_1}(x))$ in the left integral and $u=-\log(\bar{F_2}(x))$ in the right integral, we obtain

\begin{eqnarray*} \frac{1}{2}\int ^{\infty}_{0} u^{n-1} {\rm e}^{-u} f_1\left(F^{-1}_1\left(1-{\rm e}^{-u}\right)\right)\,{\rm d}u =\frac{1}{2}\int ^{\infty}_{0} u^{n-1}{\rm e}^{-u} f_2\left(F^{-1}_2\left(1-{\rm e}^{-u}\right)\right)\,{\rm d}u. \end{eqnarray*}

Thus, we have

\begin{eqnarray*} \frac{1}{2}\int ^{\infty}_{0} \left[ f_1\left(F^{-1}_1\left(1-{\rm e}^{-u}\right)\right) - f_2\left(F^{-1}_2\left(1-{\rm e}^{-u}\right)\right) \right] {\rm e}^{-u} u^{n-1}\, {\rm d}u=0. \end{eqnarray*}

Since this equality holds for all $n \geq 1$, and each Laguerre polynomial is a finite linear combination of the monomials $u^s$, it follows that

\begin{eqnarray*} \frac{1}{2}\int ^{\infty}_{0} \left[ f_1\left(F^{-1}_1\left(1-{\rm e}^{-u}\right)\right) - f_2\left(F^{-1}_2\left(1-{\rm e}^{-u}\right)\right) \right] {\rm e}^{-u/2} L_n(u)\, {\rm d}u=0, \end{eqnarray*}

where $L_n(u)$ is the Laguerre polynomial given in Lemma 2.4. Using Lemma 2.4, we have

$$ f_1\left(F^{-1}_1\left(1-{\rm e}^{-u}\right)\right) = f_2\left(F^{-1}_2\left(1-{\rm e}^{-u}\right)\right). $$

Substituting $w=1-{\rm e}^{-u}$ in the above expression, we get

$$ f_1\left(F^{-1}_1(w)\right) = f_2\left(F^{-1}_2\left(w\right)\right) \quad\text{for all } w \in (0, 1). $$

It is easy to show that $(F^{-1})^{\prime} (w)={\rm d}(F^{-1}(w))/{\rm d}w=1/f(F^{-1}(w)) $. Thus, we have $ (F^{-1}_1)^{\prime} (w)=(F^{-1}_2)^{\prime} (w) \ \text{for all } w \in (0, 1).$ Therefore, $F^{-1}_1 (w)=F^{-1}_2(w)+b,$ where b is a constant. The necessity part is obvious.

In what follows, we examine the effect of monotone transformations on the inaccuracy measure defined in Eq. (7). In this context, we prove the following theorem.

Theorem 2.6. Assume that X is a nonnegative and continuous random variable with p.d.f. f(x) and c.d.f. F(x). Suppose $Y=\psi(X)$, where ψ is a strictly monotonically increasing and differentiable function with derivative $\psi^{\prime}(x)$, and let G(y) and g(y) denote the distribution and density functions of Y, respectively. Let $X_n$ denote the nth lower record value associated with X, with p.d.f. $f^l_n$, and let $Y_n$ denote the nth lower record value associated with Y, with p.d.f. $g^l_n$. Then

\begin{equation*} J\left(g^l_n, g\right)=J\left(\,f^l_n, \frac{f}{\psi^{\prime}\left(x\right)}\right). \end{equation*}

Proof. The p.d.f. of $Y=\psi(X)$ is $g(y)=f\left(\psi^{-1}\left(y\right)\right)/ \psi^{\prime}\left(\psi^{-1}\left(y\right)\right).$ Therefore,

\begin{eqnarray*} J\left(g^l_n, g\right)=-\frac{1}{2}\int^\infty_0 \frac{\left(-\log G\left(y\right)\right)^{n-1}}{(n-1)!} g^2(y)\, {\rm d}y . \end{eqnarray*}

This gives

\begin{eqnarray*} J\left(g^l_n, g\right)=-\frac{1}{2}\int^\infty_0 \frac{\left(-\log F \left(\psi^{-1}(y)\right)\right)^{n-1}}{(n-1)!} \left(\frac{f\left(\psi^{-1}(y)\right)}{\psi^{\prime}\left(\psi^{-1}(y)\right)} \right)^2 \,{\rm d}y. \end{eqnarray*}

Making the change of variable $x=\psi^{-1}(y)$, we get

\begin{eqnarray*} J\left(g^l_n, g\right)&=&-\frac{1}{2}\int^\infty_0 \frac{\left(-\log F\left(x\right)\right)^{n-1}}{\left(n-1\right)!} \frac{f^2(x)}{\psi^{\prime}(x)}\,{\rm d}x\\ &=& -\frac{1}{2}\int^\infty_0 \frac{\left(-\log F \left(x\right)\right)^{n-1}f(x)}{(n-1)!} \frac{f(x)}{\psi^{\prime}(x)}\,{\rm d}x\\ &=& J\left(\,f^l_n, \frac{f}{\psi^{\prime}(x)}\right). \end{eqnarray*}

The proof is completed.

Corollary 2.7. Let X be a nonnegative absolutely continuous random variable and let us define $Y = bX +c$, where b and c are constants with b > 0. Then

\begin{equation*} J(g^l_n, g)= \frac{1}{b} J(\,f^l_n, f). \end{equation*}

Proof. The p.d.f. of $Y=bX+c$ is $g(y)=\frac{1}{b}\, f\left(\frac{y-c}{b}\right).$ Therefore,

\begin{eqnarray*} J(g^l_n, g)=-\frac{1}{2}\int^\infty_0 \frac{\left[-\log F\left( \frac{y-c}{b}\right)\right]^{n-1}}{(n-1)!} \left( \frac{1}{b} f\left(\frac{y-c}{b}\right) \right)^2 \,{\rm d}y. \end{eqnarray*}

By changing $x=\frac{y-c}{b}$, we get

\begin{eqnarray*} J\left(g^l_n, g\right)&=&-\frac{1}{2}\int^\infty_0 \frac{\left[-\log F\left( x\right)\right]^{n-1}}{(n-1)!} \frac{1}{b^2} f^2(x) b\, {\rm d}x \\ &=&-\frac{1}{2b}\int^\infty_0 \frac{\left[-\log F( x)\right]^{n-1}}{(n-1)!} f(x) f(x)\, {\rm d}x\\ &=& \frac{1}{b} J(\,f^l_n, f). \end{eqnarray*}

The proof is completed.

Thus, the inaccuracy measure defined in Eq. (7) is invariant under location but not under scale transformations.

3. Measure of inaccuracy for some distributions

In this section, we derive expressions for the proposed measure for some continuous distributions of a random variable X.

  1. Exponential distribution

    Suppose that a random variable X is exponentially distributed over x > 0 with mean $1/\lambda \gt 0$. Then $J(\,f^u_n, f)=-\frac{\lambda}{2^{n+1}}$. We observe that, for a fixed value of n, the inaccuracy of the nth record value for the exponential distribution decreases as the parameter λ > 0 increases. Similarly, if λ is fixed, then $J(\,f^u_n, f)$ increases with the record index n. In this case, we obtain the nth inaccuracy differential $ ~\Psi_n=J(\,f^u_{n+1}, f)-J(\,f^u_n, f)=\frac{\lambda}{2^{n+2}}$ for all n; that is, $\Psi_n$ depends on the record index n.

    In Figure 2, the functions $J(\,f^u_n, f)$ and $\Psi_n$ are shown in the left and right panels, respectively, for the exponential distribution with respect to the parameter n for selected values of $\lambda=1, 5, 10$. It is observed that $J(\,f^u_n, f)$ is nondecreasing and $\Psi_n$ is nonincreasing with respect to n for fixed values of λ. Note that $J(\,f^u_n, f)$ is nonincreasing and that $\Psi_n$ is nondecreasing in λ.

    Figure 2. Plot of $J(\,f^u_n, f)$ (left panel) and $\Psi_n$ (right panel) for the exponential distribution.

  2. Uniform distribution

    Let a nonnegative random variable X be uniformly distributed over (a, b), a < b, with density and distribution functions, respectively, given by

    $$ f (x) =\frac{1}{b-a} $$
    and
    $$ F(x) =\frac{x-a}{b-a}, \quad a\lt x \lt b. $$

    Then

    \begin{eqnarray*} J\left(\,f^u_n, f\right)&=&-\frac{1}{2\Gamma(n)(b-a)^2}\int^b_a \left[ -\log\left(\frac{b-x}{b-a}\right) \right]^{n-1}\,{\rm d}x \\ &=& -\frac{1}{2(b-a)}. \end{eqnarray*}

    Recall that $\Psi_n=J(\,f^u_{n+1}, f)-J(\,f^u_n, f) $ is the nth inaccuracy differential, that is, the change in inaccuracy in observing the record value from the nth to the $(n + 1)$th. In the case of the uniform distribution, $\Psi_n=J(\,f^u_{n+1}, f)-J(\,f^u_n, f)=0$; that is, for the uniform distribution, the measure of inaccuracy $J(\,f^u_n, f)$ remains constant for all n.

  3. Weibull distribution

    Assume that a nonnegative random variable X has a Weibull distribution over x > 0 with p.d.f. given by

    $$f (x) =\lambda \delta x^{\delta-1} \, \exp\{-\lambda x^\delta \}, \quad \lambda,\delta \gt 0,$$
    where λ and δ are scale and shape parameters, respectively. Then the SF is
    $$ \quad \bar{F}(x) =\exp \{-\lambda x^\delta \}. $$

    We have

    (8) \begin{eqnarray} J(\,f^u_n, f)=- \frac{\delta \lambda^{1/\delta}}{2^{n+2-\delta^{-1}}}\frac{\Gamma(n+1-\delta^{-1})}{\Gamma(n)}. \end{eqnarray}

    In particular, for δ = 1, Eq. (8) reduces to the nth record inaccuracy for the exponential distribution (a numerical check of Eq. (8) is sketched after Figure 3). The nth inaccuracy differential is given as follows:

    \begin{eqnarray*} \Psi_n=\frac{\Gamma(n+1-\delta^{-1})}{\Gamma(n+1)} \frac{\delta\lambda^{1/\delta}(n+\delta^{-1}-1)}{2^{n+3-\delta^{-1}}}. \end{eqnarray*}

    In Figure 3, the functions $J(\,f^u_n, f)$ and $\Psi_n$ are shown in the left and right panels, respectively, for the Weibull distribution with respect to the parameter n for selected values of $\lambda=1, 3$ and $\delta=1, 3$. It is observed that $J(\,f^u_n, f)$ is nondecreasing and that $\Psi_n$ is nonincreasing with respect to n for fixed values of λ and δ. Note that $J(\,f^u_n, f)$ is nonincreasing and that $\Psi_n$ is nondecreasing in δ. Also, as in the exponential case, $J(\,f^u_n, f)$ is nonincreasing and $\Psi_n$ is nondecreasing in λ.

    Figure 3. Plot of $J(\,f^u_n, f)$ (left panel) and $\Psi_n$ (right panel) for the Weibull distribution.
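As referenced above, Eq. (8) can be checked by direct numerical integration; the sketch below (ours) compares the closed form with quadrature for one parameter choice:

```python
# Numerical check of Eq. (8) (our sketch) for the Weibull distribution with
# survival function exp(-lam * x**delta).
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

lam, delta, n = 3.0, 2.0, 2
f = lambda x: lam * delta * x ** (delta - 1) * np.exp(-lam * x ** delta)
fn = lambda x: (lam * x ** delta) ** (n - 1) * f(x) / gamma(n)  # nth upper record p.d.f.

J_num, _ = quad(lambda x: -0.5 * fn(x) * f(x), 0, np.inf)
J_closed = (-delta * lam ** (1 / delta) * gamma(n + 1 - 1 / delta)
            / (2 ** (n + 2 - 1 / delta) * gamma(n)))
print(J_num, J_closed)  # agree; setting delta = 1 recovers -lam / 2**(n+1)
```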

4. Measure of inaccuracy for $F^\beta$ distributions

In this section, we introduce the inaccuracy measure for the nth lower record value for some $F^\beta$ distributions. The $F^\beta$ distributions have been widely studied in statistics because of their wide applicability in the modeling and analysis of lifetime data. This model is flexible enough to accommodate both monotonic and nonmonotonic failure rates even though the baseline failure rate is monotonic.

Let X and Y be nonnegative continuous random variables with p.d.f.s f(x) and g(x) and c.d.f.s F(x) and G(x), respectively. They satisfy the proportional reversed hazard rate model with proportionality constant β > 0 if

(9) \begin{equation} G(x)=F^{\beta}(x), \end{equation}

where F(x) is the baseline distribution and G(x) can be considered as some reference distributions (see [Reference Gupta and Nanda9]).

Remark 4.1. Let $X_1, X_2, \ldots, X_m$ be i.i.d. random variables with c.d.f. F(x), representing the lifetime of components in an m-component parallel system. Then the lifetime of the parallel system is given by $Z= \max \{X_1, X_2, \ldots, X_m \}$ with c.d.f. G(x) given by Eq. (9).

  1. Power function (PF) distribution

    A random variable X is said to have PF distribution if its c.d.f. and p.d.f. are given by

    $$ G(x)=\left( \frac{x}{\theta} \right)^\beta,\quad~ x \in (0, \theta), ~\theta \gt 0, ~\beta \gt 0, $$
    and
    $$ g(x)=\frac{\beta}{\theta} \left( \frac{x}{\theta} \right)^{\beta-1}. $$

    We get

    $$ J(g^l_n, g)=- \frac{\beta}{2\theta (2-\frac{1}{\beta})^n}, $$
    and
    $$ J(g^l_{n+1}, g)=- \frac{\beta}{2\theta (2-\frac{1}{\beta})^{n+1}}. $$

    Moreover,

    $$\Psi_n = J(g^l_{n+1}, g) - J(g^l_n, g)=\frac{\beta-1}{2\theta}\left( \frac{\beta}{2\beta-1} \right)^{n+1}, $$
    which means that the difference between the inaccuracy measures of two consecutive record values from the PF distribution depends on n. For β = 1, the PF distribution reduces to the uniform distribution and $\Psi_n =0$.

    In Figure 4, the functions $J(g^l_n, g)$ and $\Psi_n$ are shown in the left and right panels, respectively, for the PF distribution with respect to the parameter n for selected values of $\theta=1, 4$ and $\beta=2, 4$. It is observed that $J(g^l_n, g)$ is nondecreasing and that $\Psi_n$ is nonincreasing with respect to n for fixed values of θ and β. Note that $J(g^l_n, g)$ is nondecreasing and that $\Psi_n$ is nonincreasing in θ. Also, $J(g^l_n, g)$ and $\Psi_n$ are not monotone in β.

    Figure 4. Plot of $J(g^l_n, g)$ (left panel) and $\Psi_n$ (right panel) for the PF distribution.

  2. Gompertz–Verhulst exponentiated (GVE) distribution

    The GVE distribution is defined by the following c.d.f. and p.d.f.:

    $$ G(x)=\left( 1-\rho \exp\{-\lambda x\}\right)^\beta$$

    and

    $$ g(x)=\beta\left( 1-\rho\, \exp\{-\lambda x\}\right)^{\beta-1}\rho \lambda\, \exp\{-\lambda x\}.$$

    Then,

    $$ J(g^l_n, g)=-\frac{\lambda \beta}{2} \left[\frac{1}{(2-\frac{1}{\beta})^n}-{\frac{1}{2^n}} \right]$$
    and
    $$ J(g^l_{n+1}, g)=-\frac{\lambda \beta}{2} \left[\frac{1}{(2-\frac{1}{\beta})^{n+1}}-{\frac{1}{2^{n+1}}} \right].$$

    Thus, we have

    $$\Psi_n = J(g^l_{n+1}, g) - J(g^l_n, g)= \frac{\lambda \beta}{2} \left[ \frac{\beta-1}{(2\beta-1)(2-\frac{1}{\beta})^{n}}-\frac{1}{2^{n+1}} \right], $$
    which means that the difference between the inaccuracy measures of two consecutive record values from the GVE distribution depends on n.

    In Figure 5, the functions $J(g^l_n, g)$ and $\Psi_n$ are shown in the left and right panels, respectively, for the GVE distribution with respect to the parameter n for selected values of $\lambda=2, 4$ and $\beta=2, 4$. It is observed that $J(g^l_n, g)$ and $\Psi_n$ are not monotone with respect to n for fixed values of λ and β. Note that $J(g^l_n, g)$ is nonincreasing and that $\Psi_n$ is nondecreasing in λ. Also, $J(g^l_n, g)$ is nondecreasing and $\Psi_n$ is not monotone in β.

    Figure 5. Plot of $J(g^l_n, g)$ (left panel) and $\Psi_n$ (right panel) for the GVE distribution.

    The exponentiated exponential (EE) distribution introduced by [Reference Gupta and Kundu8] has some interesting physical interpretations. A random variable X is said to have the EE distribution if its c.d.f. is given by

    \begin{equation*} G(x)= \left[ 1-\exp \left(-\lambda x\right) \right]^\beta,\quad x \gt 0,\ \beta \gt 0,\ \lambda \gt 0. \end{equation*}

    In particular, the exponential distribution is obtained for β = 1. The EE distribution is the special case of the GVE distribution with ρ = 1, in which case $J(g^l_n, g)$ is the same as that of the GVE distribution.

5. Nonparametric estimators

In this section, we provide two methods for nonparametric estimation of the inaccuracy of nth upper (lower) record values. Recently, similar methods have been used for inaccuracy estimation in [Reference Hashempour and Mohammadi10] and [Reference Viswakala and Sathar31]. Let $X_{1},\dots,X_{n}$ be a random sample taken from a population with density function $ f(\cdot)$ and c.d.f. $ F(\cdot) $. Then a nonparametric estimator of the measure of inaccuracy between the distribution of the nth upper record value and the parent random variable is given by

\begin{equation*} \skew3\hat{J}(\,f^u_n, f)=-\frac{1}{2} \int_{0}^{\infty } \frac{\left[ -\log(1-\widehat{F}(x)) \right]^{n-1}}{(n-1)!} \widehat{f}(x)^2\, {\rm d}x, \end{equation*}

where $ \widehat{F}(\cdot) $ and $ \widehat{f}(\cdot) $ denote the estimators of $ F(\cdot) $ and $ f(\cdot)$, respectively.

We use the empirical and kernel methods to estimate the c.d.f. F(x). In the empirical method, the c.d.f. F(x) is estimated by $ \widehat{F}_n(x)=\frac{1}{n} \sum_{i=1}^{n} I(X_i \leq x) $. We design another estimator based on the kernel-smoothed estimator of the c.d.f., since smoothed estimators often perform better than nonsmoothed ones. The kernel c.d.f. estimator is defined as follows:

\begin{equation*} \widehat{F}_{h_n}(x)= \frac{1}{n} \sum_{i=1}^{n} W\left(\frac{x-X_i}{h_n}\right), \end{equation*}

where $W(\cdot)$ is the c.d.f. of a positive kernel function $K(\cdot)$, that is, $W(x) =\int_{-\infty}^{x} K(t)\,{\rm d}t$, and $h_n$ is a bandwidth parameter.
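For concreteness, here is a minimal sketch (ours) of both c.d.f. estimators with a Gaussian kernel, so that W is the standard normal c.d.f.:

```python
# Minimal sketch (ours) of the empirical and kernel-smoothed c.d.f. estimators.
import numpy as np
from scipy.stats import norm

def ecdf(x, data):
    # empirical c.d.f.: F_n(x) = (1/n) * sum_i I(X_i <= x)
    x = np.atleast_1d(x)
    return np.mean(data[None, :] <= x[:, None], axis=1)

def kernel_cdf(x, data, h):
    # smoothed c.d.f.: F_h(x) = (1/n) * sum_i W((x - X_i) / h), with W = Phi
    x = np.atleast_1d(x)
    return np.mean(norm.cdf((x[:, None] - data[None, :]) / h), axis=1)

data = np.random.default_rng(0).exponential(size=30)
print(ecdf([1.0], data), kernel_cdf([1.0], data, h=0.4))
```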

The kernel $ K (\cdot) $ must be selected carefully, and the bandwidth or smoothing parameter $h_n$ even more carefully. To select the bandwidth parameter, we use two approaches. First, we find the optimal $h_n$ by minimizing the integrated mean squared error under a Gaussian kernel, which gives

\begin{equation*} h_n^{1}=0.9 \sigma n^{-1/5}. \end{equation*}

Usually, σ is estimated by $\min\{S, Q/1.34\}$, where S is the sample standard deviation and Q is the interquartile range. This choice of $h_n^{1}$ works well if the true density is very smooth. This approach was called the normal reference rule or rule-of-thumb in [Reference Silverman30].

In the second approach, the bandwidth is chosen by cross-validation. Since we do not necessarily want to assume that the density of the data is very smooth, it is usually better to estimate $h_n$ using cross-validation. We use the leave-one-out cross-validation criterion of [Reference Sarda26], which selects

\begin{equation*} h_n^{2}= \arg \min_{h_n} \frac{1}{n} \sum_{i=1}^{n} (F_{h_n,-i}(X_i)-F_{n}(X_i))^2, \end{equation*}

where $ F_{h_n,-i}(X_i) $ is the leave-one-out version of the kernel-smoothed estimator of the c.d.f., defined as

\begin{equation*} F_{h_n,-i}(x) = \frac{1}{n-1} \sum_{j\neq i} W\left(\frac{x-X_j}{h_n}\right). \end{equation*}
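A direct implementation of this criterion by grid search (our sketch; the bandwidth grid is an assumption, not part of the paper):

```python
# Leave-one-out cross-validation bandwidth selection (our sketch).
import numpy as np
from scipy.stats import norm

def cv_bandwidth(data, grid):
    n = len(data)
    ecdf_at_data = np.mean(data[None, :] <= data[:, None], axis=1)  # F_n(X_i)
    scores = []
    for h in grid:
        W = norm.cdf((data[:, None] - data[None, :]) / h)  # W_ij = W((X_i - X_j)/h)
        np.fill_diagonal(W, 0.0)
        loo = W.sum(axis=1) / (n - 1)                      # F_{h_n,-i}(X_i)
        scores.append(np.mean((loo - ecdf_at_data) ** 2))
    return grid[int(np.argmin(scores))]

data = np.random.default_rng(0).exponential(size=50)
print(cv_bandwidth(data, np.linspace(0.05, 1.0, 40)))
```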

The supports of the random variables considered in Sections 3 and 4 are bounded. Near the boundaries, kernel density estimates have larger errors, since they tend to smooth the probability mass over the boundary points. To solve this boundary problem, we use the reflection boundary technique illustrated in [Reference Scott28]. In this technique, to estimate the density $ f(\cdot)$ from a sample $x_{1},\dots,x_{n}$, we first add the reflection of the entire sample, that is, append $-x_{1},\dots,-x_{n}$ to the data. We then estimate a density $ s(\cdot)$ using the 2n points, but use the original n points to determine the bandwidth parameter, and set $\widehat{f}(x) = 2 \widehat{s}(x)$. This method is applied to the kernel density estimation, with the bandwidth selected using the rule-of-thumb and cross-validation techniques.
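Putting the pieces together, the following is a minimal end-to-end sketch (ours) of the plug-in estimator $\skew3\hat{J}(\,f^u_n, f)$: reflection-boundary kernel density estimation, a kernel c.d.f. estimate, the rule-of-thumb bandwidth, and numerical integration of the defining integral. Using the same bandwidth for the density and the c.d.f. is a simplification on our part:

```python
# End-to-end sketch (ours) of J_hat(f^u_n, f); sharing one bandwidth between
# f_hat and F_hat is a simplifying assumption.
import numpy as np
from scipy.stats import norm, iqr
from scipy.special import gamma
from scipy.integrate import quad

def rule_of_thumb(data):
    sigma = min(np.std(data, ddof=1), iqr(data) / 1.34)
    return 0.9 * sigma * len(data) ** (-1 / 5)

def f_hat(x, data, h):
    # reflection boundary: estimate from the 2n points {data, -data}, doubled on x >= 0
    aug = np.concatenate([data, -data])
    return 2.0 * np.mean(norm.pdf((x - aug) / h)) / h

def F_hat(x, data, h):
    return np.mean(norm.cdf((x - data) / h))

def J_hat_upper(data, n_rec):
    h = rule_of_thumb(data)
    def integrand(x):
        Fbar = np.clip(1.0 - F_hat(x, data, h), 1e-12, 1.0)   # guard against log(0)
        w = (-np.log(Fbar)) ** (n_rec - 1) / gamma(n_rec)     # [-log(1-F)]^(n-1)/(n-1)!
        return -0.5 * w * f_hat(x, data, h) ** 2
    upper = data.max() + 5 * h  # the estimated density is negligible beyond this
    return quad(integrand, 0.0, upper, limit=200)[0]

x = np.random.default_rng(0).exponential(size=50)
print(J_hat_upper(x, n_rec=2))  # true value is -1/2**3 = -0.125 for Exp(1) data
```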

Similarly, a nonparametric estimator of the inaccuracy between the distribution of the nth lower record value and the parent random variable is defined by

\begin{equation*} \skew3\hat{J}\left(g^l_n, g\right)=-\frac{1}{2} \int_{0}^{\infty } \frac{\left[ -\log\left(\widehat{G}(x)\right) \right]^{n-1}}{\left(n-1\right)!} \widehat{g}(x)^2\, {\rm d}x, \end{equation*}

where $ \widehat{G}(\cdot) $ and $ \widehat{g}(\cdot) $ denote the estimators of $ G(\cdot) $ and $ g(\cdot)$, respectively. These estimators can be computed by similar methods as discussed above for estimates $ \widehat{F}(\cdot) $ and $ \widehat{f}(\cdot) $.

6. Simulation

In this section, we utilize a Monte Carlo simulation study to assess our estimators in terms of bias and root mean square error (RMSE). For the simulations, we consider the Weibull and GVE distributions with various parameters. The proposed estimators are computed for various sample sizes (20 and 50) and record indices n (2, 3, and 5).

The bias and RMSE of the four estimators of the nth upper record value for the Weibull distribution and the nth lower record value for the GVE distribution are given in Tables 1 and 2, respectively. The simulation results show that the combination of density estimation with the reflection boundary technique and c.d.f. estimation with the kernel method under the rule-of-thumb bandwidth selection, $ (\widehat{F}_{h_1},\widehat{f}_{h_1}) $, has the best performance among the proposed estimators under different parameters and record values. When the c.d.f. is estimated with the empirical method ( $ \widehat{F}_{n} $), estimators based on kernel density estimation with the rule-of-thumb bandwidth ( $ \,\widehat{f}_{h_1} $) perform better than those based on the cross-validation bandwidth ( $ \,\widehat{f}_{h_2} $). The accuracy of the estimators using the cross-validation technique improves as the sample size increases. The bias and RMSE of the proposed estimators decrease as the sample size or the record index increases. These results can be seen in both Tables 1 and 2.
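The following Monte Carlo sketch (ours) illustrates how such bias/RMSE tables can be produced; it uses exponential data (the Weibull case with δ = 1, where $J(\,f^u_n, f)=-\lambda/2^{n+1}$) and assumes the J_hat_upper function from the Section 5 sketch is in scope:

```python
# Monte Carlo bias/RMSE sketch (ours); assumes J_hat_upper from the Section 5
# sketch is defined.  Exponential data = Weibull with delta = 1.
import numpy as np

def mc_bias_rmse(lam, n_rec, size, reps=500, seed=0):
    rng = np.random.default_rng(seed)
    true_val = -lam / 2 ** (n_rec + 1)
    est = np.array([J_hat_upper(rng.exponential(scale=1 / lam, size=size), n_rec)
                    for _ in range(reps)])
    return est.mean() - true_val, float(np.sqrt(np.mean((est - true_val) ** 2)))

for size in (20, 50):
    for n_rec in (2, 3, 5):
        bias, rmse = mc_bias_rmse(lam=1.0, n_rec=n_rec, size=size)
        print(size, n_rec, round(bias, 4), round(rmse, 4))
```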

Table 1. Bias and RMSE estimation of $ \skew3\hat{J}(\,f^u_n, f) $ based on the Weibull distribution.

Table 2. Bias and RMSE estimation of $ \skew3\hat{J}(g^l_n, g) $ based on the GVE distribution.

7. Applications

Here, we consider a real data set to show the behavior of the estimators in real cases. This data set consists of active repair times (in hours) for an airborne communication transceiver reported by [Reference Balakrishnan, Leiva, Sanhueza and Cabrera3], which was initially given by [Reference Chhikara and Folks5]. The actual observations are listed below:

0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.5, 1.5, 2.0, 2.0, 2.2, 2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0, 4.0, 4.5, 4.7, 5.0, 5.4, 5.4, 7.0, 7.5, 8.8, 9.0, 10.3, 22, 24.5.

We have fitted the Weibull distribution to the data set, estimating the parameters by maximum likelihood. The obtained estimates are $\widehat{\lambda}=3.391$ and $\widehat{\delta}=0.899$, with a Kolmogorov–Smirnov (K-S) statistic of 0.120 and p-value 0.517.
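The fit can be reproduced along the following lines (our sketch); note that we assume the reported $\widehat{\lambda}$ plays the role of the scale parameter in the standard $(x/\lambda)^{\delta}$ Weibull parameterization, which matches the scale of the repair-time data:

```python
# Hedged sketch (ours) of the Weibull fit and K-S check with scipy.
import numpy as np
from scipy import stats

repair = np.array([0.2, 0.3, 0.5, 0.5, 0.5, 0.5, 0.6, 0.6, 0.7, 0.7, 0.7,
                   0.8, 0.8, 1.0, 1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.5, 1.5,
                   1.5, 2.0, 2.0, 2.2, 2.5, 2.7, 3.0, 3.0, 3.3, 3.3, 4.0,
                   4.0, 4.5, 4.7, 5.0, 5.4, 5.4, 7.0, 7.5, 8.8, 9.0, 10.3,
                   22.0, 24.5])

shape, loc, scale = stats.weibull_min.fit(repair, floc=0)  # MLE, location fixed at 0
print(shape, scale)  # compare with the reported delta_hat and lambda_hat
print(stats.kstest(repair, stats.weibull_min(c=shape, scale=scale).cdf))
```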

Figure 6 shows the theoretical values and the nonparametric estimates of $ J(\,f^u_n, f) $ for the Weibull distribution. The nonparametric estimates are based on density estimation with the reflection boundary technique; among them, the one combining this with c.d.f. estimation by the kernel method under the rule-of-thumb bandwidth selection, $ (\widehat{F}_{h_1},\widehat{f}_{h_1}) $, has the smallest error and is closest to the theoretical value. It can also be seen in Figure 6 that, as the record value increases, all estimates approach the theoretical value.

Figure 6. Plots for comparing theoretical and nonparametric estimators of $ J(\,f^u_n, f) $ for the Weibull distribution.

8. Conclusion and future remarks

In this paper, we have extended the Kerridge measure of inaccuracy to record values, and a characterization result has been studied. Some properties of the proposed measure have been discussed. It was also shown that the defined measure of inaccuracy is invariant under location but not under scale transformations. We have characterized certain specific lifetime distribution functions. The measure has also been studied for some $F^\beta$ distributions. Nonparametric estimators of inaccuracy based on extropy for the nth upper and lower record values have been proposed. Simulation results showed that density estimation with the reflection boundary technique combined with c.d.f. estimation by the kernel method under the rule-of-thumb bandwidth selection has the best performance among the proposed estimators. The performance of the estimators on a real data set has been discussed as well.

Acknowledgments

The authors would like to thank referees and the editor for their useful comments and constructive criticisms on the original version of this manuscript, which led to this considerably improved version.

References

Ahsanullah, M. (2004). Record values—Theory and applications. New York: University Press.
Arnold, B.C., Balakrishnan, N., & Nagaraja, H.N. (1998). Records, vol. 768. New York: Wiley.
Balakrishnan, N., Leiva, V., Sanhueza, A., & Cabrera, E. (2009). Mixture inverse Gaussian distributions and its transformations, moments and applications. Statistics 43(1): 91–104.
Baratpour, S., Ahmadi, J., & Arghami, N.R. (2007). Entropy properties of record statistics. Statistical Papers 48(2): 197–213.
Chhikara, R. & Folks, J.L. (1988). The inverse Gaussian distribution: Theory, methodology, and applications, vol. 95. New York: CRC Press.
Cover, T.M. & Thomas, J.A. (2006). Elements of information theory, 2nd ed. Hoboken, NJ: Wiley.
Ebrahimi, N., Soofi, E.S., & Zahedi, H. (2004). Information properties of order statistics and spacings. IEEE Transactions on Information Theory 50(1): 177–183.
Gupta, R.D. & Kundu, D. (1999). Theory and methods: Generalized exponential distributions. Australian and New Zealand Journal of Statistics 41(2): 173–188.
Gupta, R.D. & Nanda, A.K. (2001). Some results on reversed hazard rate ordering. Communications in Statistics—Theory and Methods 30(11): 2447–2457.
Hashempour, M. & Mohammadi, M. (2022). On dynamic cumulative past inaccuracy measure based on extropy. Communications in Statistics—Theory and Methods: 1–18.
Jahanshahi, S.M.A., Zarei, H., & Khammar, A.H. (2020). On cumulative residual extropy. Probability in the Engineering and Informational Sciences 34(4): 605–625.
Kerridge, D.F. (1961). Inaccuracy and inference. Journal of the Royal Statistical Society. Series B (Methodological) 23(1): 184–194.
Kullback, S. (1959). Information theory and statistics. New York: Wiley.
Kumar, V. (2015). Generalized entropy measure in record values and its applications. Statistics and Probability Letters 106: 46–51.
Lad, F., Sanfilippo, G., & Agro, G. (2015). Extropy: Complementary dual of entropy. Statistical Science 30(1): 40–58.
Mohammadi, M. & Hashempour, M. (2022). On interval weighted cumulative residual and past extropies. Statistics 56(5): 1029–1047.
Nair, R.D. & Sathar, E.I. (2020). On dynamic failure extropy. Journal of the Indian Society for Probability and Statistics 21(2): 287–313.
Nath, P. (1968). Inaccuracy and coding theory. Metrika 13(1): 123–135.
Pakdaman, Z. & Haashempour, M. (2019). On dynamic survival past extropy properties. Journal of Statistical Research of Iran JSRI 16(1): 229–244.
Pakdaman, Z. & Hashempour, M. (2021). Mixture representations of the extropy of conditional mixed systems and their information properties. Iranian Journal of Science and Technology, Transactions A: Science 45(3): 1057–1064.
Qiu, G. & Jia, K. (2018). Extropy estimators with applications in testing uniformity. Journal of Nonparametric Statistics 30(1): 182–196.
Qiu, G. (2017). The extropy of order statistics and record values. Statistics and Probability Letters 120: 52–60.
Qiu, G., Wang, L., & Wang, X. (2019). On extropy properties of mixed systems. Probability in the Engineering and Informational Sciences 33(3): 471–486.
Raqab, M.Z. & Awad, A.M. (2001). A note on characterization based on Shannon entropy of record statistics. Statistics 35(4): 411–413.
Raqab, M.Z. & Qiu, G. (2019). On extropy properties of ranked set sampling. Statistics 53(1): 210–226.
Sarda, P. (1993). Smoothing parameter selection for smooth distribution functions. Journal of Statistical Planning and Inference 35(1): 65–75.
Sathar, E.I. & Nair, R.D. (2019). On dynamic survival extropy. Communications in Statistics—Theory and Methods 50(6): 1295–1313.
Scott, D.W. (2015). Multivariate density estimation: Theory, practice, and visualization. Hoboken, NJ: Wiley.
Shannon, C.E. (1948). A mathematical theory of communication. The Bell System Technical Journal 27(3): 379–423.
Silverman, B.W. (1986). Density estimation for statistics and data analysis. Monographs on Statistics and Applied Probability. New York: Chapman & Hall.
Viswakala, K.V. & Sathar, E.A. (2022). Kernel estimation of the dynamic cumulative past inaccuracy measure for right censored dependent data. REVSTAT-Statistical Journal, accepted December 2022. Available: https://revstat.ine.pt/index.php/REVSTAT/article/view/541.
Yang, J., Xia, W., & Hu, T. (2018). Bounds on extropy with variational distance constraint. Probability in the Engineering and Informational Sciences 33(2): 186–204.