
An inaccuracy measure between non-explosive point processes with applications to Markov chains

Published online by Cambridge University Press:  25 October 2023

Vanderlei da Costa Bueno*
Affiliation:
São Paulo University
Narayanaswamy Balakrishnan*
Affiliation:
McMaster University
*Postal address: Institute of Mathematics and Statistics, São Paulo University, Rua do Matão 1010, CEP 05508-090, São Paulo, Brazil. Email address: bueno@ime.usp.br
**Postal address: Department of Mathematics and Statistics, McMaster University, Hamilton, Ontario L8S 4LS, Canada. Email address: bala@mcmaster.ca

Abstract

Inaccuracy and information measures based on cumulative residual entropy are quite useful and have received considerable attention in many fields, such as statistics, probability, and reliability theory. In particular, many authors have studied cumulative residual inaccuracy between coherent systems based on system lifetimes. In a previous paper (Bueno and Balakrishnan, Prob. Eng. Inf. Sci. 36, 2022), we discussed a cumulative residual inaccuracy measure for coherent systems at component level, that is, based on the common, stochastically dependent component lifetimes observed under a non-homogeneous Poisson process. In this paper, using a point process martingale approach, we extend this concept to a cumulative residual inaccuracy measure between non-explosive point processes and then specialize the results to Markov occurrence times. If the processes satisfy the proportional risk hazard process property, then the measure determines the Markov chain uniquely. Several examples are presented, including birth-and-death processes and pure birth processes, and then the results are applied to coherent systems at component level subject to Markov failure and repair processes.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

An alternate measure of entropy, based on the distribution function rather than the density function of a random variable, called the cumulative residual entropy (CRE), was proposed originally by Rao et al. [Reference Rao25]. It was subsequently extended to the cumulative residual inaccuracy measure by Kumar and Taneja [Reference Kumar and Taneja19].

The main inaccuracy measure for the uncertainty of two positive and absolutely continuous random variables, S and T, defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ is that of Kerridge [Reference Kerridge17], given by

\begin{equation*}H(S, T) = \mathbb{E}[ - \log g(T)] = -\int_0^{\infty} f(x) \log g(x) dx,\end{equation*}

where f and g are the probability density functions of T and S, respectively.

In the case when S and T are identically distributed, the Kerridge inaccuracy measure gives the well-known Shannon entropy [Reference Shannon28], which plays an important role in many areas of science, such as probability and statistics, financial analysis, engineering and information theory; see Cover and Thomas [Reference Cover and Thomas6]. The Shannon entropy is defined as

\begin{equation*}H(T) = \mathbb{E}[ {-} \log f(T)] = -\int_0^{\infty} f(x) \log f(x) dx.\end{equation*}

A main drawback of the Shannon entropy is that for some probability distributions, it may be negative, and then it is no longer an uncertainty measure. This drawback was removed in the Varma entropy [Reference Varma29], which provides a generalization of order $\alpha$ and type $\beta$ of both the Shannon entropy and the Rényi entropy [Reference Rényi27]. The Varma entropy is important as a measure of complexity and uncertainty to describe many chaotic systems in physics, electronics, and engineering. The Varma entropy is defined as

\begin{equation*} H^{\beta}_{\alpha} (X) = \frac{1}{\beta-\alpha} \ln \left[\int_0^{\infty} f^{\alpha+\beta-1}(x) dx \right], \ \ \beta - 1 < \alpha < \beta, \ \ \beta \geq 1.\end{equation*}

It can be shown that

\begin{equation*}\lim_{\beta \rightarrow 1} H^{\beta}_{\alpha} (X) = H_{\alpha}(X) = \frac{1}{1-\alpha}\ln \left[\int_0^{\infty} f^{\alpha}(x) dx\right],\end{equation*}

which is indeed the Rényi entropy. Also, for $\beta = 1$ , if $\alpha \rightarrow 1$ , the Varma entropy reduces to the Shannon entropy.
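As an illustrative numerical check (ours, not part of the original development), the following Python sketch evaluates the Varma entropy by quadrature for a standard exponential density and confirms the two limiting relations stated above: at $\beta = 1$ it coincides with the Rényi entropy, and as $\alpha \rightarrow 1$ it approaches the Shannon entropy of $\mathrm{Exp}(1)$, which equals 1. The choice of the exponential density is only for concreteness.

```python
import numpy as np
from scipy.integrate import quad

# Density of Exp(1); any positive absolutely continuous density would do.
f = lambda x: np.exp(-x)

def varma(alpha, beta):
    """Varma entropy H^beta_alpha, computed by numerical quadrature."""
    integral, _ = quad(lambda x: f(x) ** (alpha + beta - 1), 0, np.inf)
    return np.log(integral) / (beta - alpha)

def renyi(alpha):
    """Renyi entropy H_alpha of Exp(1): closed form -ln(alpha)/(1-alpha)."""
    return -np.log(alpha) / (1 - alpha)

# At beta = 1 the Varma entropy coincides with the Renyi entropy.
print(varma(0.7, 1.0), renyi(0.7))

# As alpha -> 1 with beta = 1, the value approaches the Shannon entropy
# of Exp(1), which equals 1.
print(varma(0.999, 1.0))
```

The parameters satisfy the constraint $\beta - 1 < \alpha < \beta$, $\beta \geq 1$ of the definition.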

Recently, Kumar and Taneja [Reference Kumar and Taneja18] introduced a generalized cumulative residual entropy of order $\alpha$ and type $\beta$ based on Varma entropy, and a dynamic version of it, given by

\begin{equation*} \xi^{\beta}_{\alpha}(X) = \frac{1}{\beta-\alpha} \ln \left[\int_0^{\infty} \overline{F}^{\alpha+\beta-1}(x) dx \right], \ \ \beta - 1 < \alpha < \beta, \ \ \beta \geq 1,\end{equation*}

and

\begin{equation*} \xi^{\beta}_{\alpha}(X\,;\, t) = \frac{1}{\beta-\alpha} \ln \left[\frac{ \int_t^{\infty} \overline{F}^{\alpha+\beta-1}(x) dx}{\overline{F}^{\alpha+\beta-1}(t)}\right] .\end{equation*}
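A quick numerical sketch (ours; the exponential survival function and the rate are illustrative choices) shows a convenient consequence of the dynamic version above: for an exponential lifetime the measure is memoryless, i.e. $\xi^{\beta}_{\alpha}(X\,;\,t)$ does not depend on t and equals $\frac{1}{\beta-\alpha}\ln\!\big[\tfrac{1}{\lambda(\alpha+\beta-1)}\big]$.

```python
import numpy as np
from scipy.integrate import quad

lam = 2.0                        # rate of the Exp(lam) lifetime (our choice)
sbar = lambda x: np.exp(-lam * x)   # survival function

def xi_dynamic(alpha, beta, t):
    """Dynamic generalized cumulative residual entropy xi^beta_alpha(X; t)."""
    p = alpha + beta - 1
    integral, _ = quad(lambda x: sbar(x) ** p, t, np.inf)
    return np.log(integral / sbar(t) ** p) / (beta - alpha)

# Memorylessness: the value is the same for every t.
for t in (0.0, 1.0, 5.0):
    print(t, xi_dynamic(0.5, 1.2, t))
```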

Several authors subsequently studied various properties of these information measures. For example, Ebrahimi [Reference Ebrahimi9] proposed a measure of uncertainty about the remaining lifetime of a system working at time t, given by $H(T_t),$ where $T_t = (T-t | T>t).$ Kayal and Sunoj [Reference Kayal and Sunoj15] and Kayal et al. [Reference Kayal, Sunoj and Rajesh16] presented a generalization of it and discussed its theoretical properties.

Rao et al. [Reference Rao, Chen, Vemuri and Wang26] and Rao [Reference Rao25] provided an extension of the above measure, the cumulative residual entropy for T, by using the survival functions of T instead of the probability density function in the Shannon entropy. Asadi and Zohrevand [Reference Asadi and Zohrevand2] studied the corresponding dynamic measure using the conditional survival function $\mathbb{P}(T-t > x| T>t)$ . Di Crescenzo and Longobardi [Reference Di Crescenzo and Longobardi8] discussed an analogous measure, based on the distribution function, which is known as the cumulative past entropy of T.

Kerridge’s measure of inaccuracy has also been extended in a similar way by Kumar and Taneja [Reference Kumar and Taneja19, Reference Kumar and Taneja20]. Kundu et al. [Reference Kundu, Di Crescenzo and Longobardi22] considered the measures of Kumar and Taneja [Reference Kumar and Taneja19, Reference Kumar and Taneja20] and obtained several properties for random variables that are left-, right-, and double-truncated. Quite recently, bivariate extensions of cumulative residual (past) inaccuracy measures have been discussed by Ghosh and Kundu [Reference Ghosh and Kundu10] and Kundu and Kundu [Reference Kundu and Kundu21].

The cumulative residual inaccuracy measure of Kumar and Taneja [Reference Kumar and Taneja19] between S and T is defined as

\begin{equation*}\varepsilon(S, T) = - \int_0^{\infty} \overline{F}(t) \log \overline{G}(t) dt = \mathbb{E}\left[\int_0^T \Lambda_S(s)ds\right],\end{equation*}

where $\overline{F} = 1 - F$ and $\overline{G} = 1 - G$ are the reliability functions of T and S, respectively, F and G are the corresponding distribution functions, and $\Lambda_S(t) = -\log\overline{G}(t)$ is the cumulative hazard function of lifetime S. It is important to note that the expression is valid on the set $\{ t > S \wedge T \}$, where $ S \wedge T = \min \{ S, T\}$, and by convention, we set $0 \log 0 = 0$.

Indeed, $ \varepsilon(S, T)$ represents the information content when one uses $ \overline{G}(t)$ , the survival function asserted by the experimenter, instead of the true survival function $\overline{F}(t)$ , because information is missing or incorrect. Some transformation of this measure is present in the work of Psarrakos and Di Crescenzo [Reference Psarrakos and Di Crescenzo24].
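The equality of the two representations of $\varepsilon(S, T)$ above can be checked numerically; the following sketch (ours) takes T and S to be independent exponentials with illustrative rates, for which both representations reduce to the closed form $\lambda_S/\lambda_T^2$.

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
lam_T, lam_S = 1.5, 0.8   # rates of T and S, independent exponentials (our choice)

# Representation 1: -int_0^inf Fbar(t) log Gbar(t) dt, with -log Gbar(t) = lam_S * t.
eps_quad, _ = quad(lambda t: np.exp(-lam_T * t) * lam_S * t, 0, np.inf)

# Representation 2: E[ int_0^T Lambda_S(s) ds ] with Lambda_S(s) = lam_S * s,
# so the inner integral is lam_S * T^2 / 2; estimate the mean by Monte Carlo.
T = rng.exponential(1 / lam_T, size=200_000)
eps_mc = np.mean(lam_S * T ** 2 / 2)

# Both should be close to the closed form lam_S / lam_T**2.
print(eps_quad, eps_mc, lam_S / lam_T ** 2)
```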

Our earlier paper [Reference Bueno and Balakrishnan5] extended the definition to a symmetric inaccuracy measure based on two component lifetimes T and S, which are finite positive absolutely continuous random variables defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ , with $\mathbb{P}(S \not= T )=1$ , through the family of sub- $\sigma$ -algebras $(\Im_t)_{t \geq 0}$ of $\Im$ , where

\begin{equation*}\Im_t = \sigma\{ 1_{\{S > s \}}, 1_{\{ T>s\}}, 0 \leq s< t\}\end{equation*}

satisfies Dellacherie’s conditions of right-continuity and completeness; see [Reference Dellacherie7]. Consider, through the Doob–Meyer decomposition (see Aven and Jensen [Reference Aven and Jensen3]), the unique $ \Im_t$ -predictable, integrable compensator processes $(A_t)_{t \geq 0 }$ and $(B_t)_{t \geq 0 }$ such that $ 1_{\{ T \leq t \}} - A_t $ and $ 1_{\{ S \leq t \}} - B_t $ are 0-mean $\Im_t$ -martingales. Then, by the well-known equivalence results between distribution functions and compensator processes (see Arjas and Yashin [Reference Arjas and Yashin1]), it follows that $ A_t = - \log \overline{F}(t \wedge T|\Im_t) $ and $ B_t = - \log \overline{G}(t \wedge S|\Im_t)$ . Identifying $\Lambda_S(t)$ with $B_t$ on the set $ \{ S > t \}$ , the paper [Reference Bueno and Balakrishnan5] then established that

\begin{equation*}\varepsilon(S, T) = \mathbb{E}\left[\int_0^T B_s ds\right] = \mathbb{E}[ 1_{\{ S \leq T \}} |T - S|].\end{equation*}

Also, by using the same arguments as above, we have

\begin{equation*}\varepsilon(T,S) = \mathbb{E}\left[\int_0^S A_s ds\right] = \mathbb{E}[ 1_{\{ T \leq S \}} (S - T)] = \mathbb{E}[ 1_{\{ T \leq S \}} |S - T|].\end{equation*}

In the following definition, we now present a symmetric generalization of the Kumar–Taneja inaccuracy measure.

Definition 1. Let S and T be continuous positive random variables defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ . Then the cumulative residual inaccuracy measure is defined as

\begin{equation*} CRI_{S,T} = CRI_{T,S} = \varepsilon(S, T)+ \varepsilon(T, S) = \mathbb{E}\left[ \int_0^T B_s ds\right] + \mathbb{E}\left[\int_0^S A_s ds\right]\end{equation*}
\begin{equation*} = \mathbb{E}[ 1_{\{ S \leq T \}} |T - S|] + \mathbb{E}[ 1_{\{ T \leq S \}} |S - T|] = \mathbb{E}[|T - S|].\end{equation*}

Thus, $ CRI_{T,S}$ can be seen as a dispersion measure arising when the lifetime S asserted on the basis of the experimenter’s information is used in place of the true lifetime T. Provided we identify random variables that are equal almost everywhere, $ CRI_{S,T}$ is a metric on the $L^1$ space of random variables. Hence, if $ CRI_{T,S} = 0$ , we can conclude that the survival function asserted by the experimenter is indeed the true one.
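As a Monte Carlo sanity check of Definition 1 (ours; the independent exponential rates are illustrative choices), the two one-sided terms $\mathbb{E}[1_{\{S \leq T\}}|T-S|]$ and $\mathbb{E}[1_{\{T \leq S\}}|S-T|]$ are estimated separately, and their sum is compared with $\mathbb{E}[|T-S|]$, which for independent exponentials with rates a and b has the closed form $1/a + 1/b - 2/(a+b)$.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 1.0, 2.0                  # rates of T and S, independent exponentials (our choice)
T = rng.exponential(1 / a, size=500_000)
S = rng.exponential(1 / b, size=500_000)

eps_ST = np.mean(np.where(S <= T, T - S, 0.0))   # E[1_{S<=T} |T - S|]
eps_TS = np.mean(np.where(T <= S, S - T, 0.0))   # E[1_{T<=S} |S - T|]
cri = eps_ST + eps_TS

# cri should agree with E|T-S|, whose closed form is 1/a + 1/b - 2/(a+b).
print(cri, np.mean(np.abs(T - S)), 1 / a + 1 / b - 2 / (a + b))
```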

Remark 1. If, in Definition 1, T and S are independent and identically distributed, then $\mathbb{E}[| T-S|]$ is known as the Gini mean difference (GMD), introduced by Gini [Reference Gini11]. As a dispersion measure, it can be compared with the variance of $T-S$ ; this comparison generated a debate between Gini and the Anglo-Saxon statisticians. The most popular presentation of the variance is as a second central moment of the distribution, while the most popular form of the GMD is the expected absolute difference between two independent and identically distributed random variables. However, as shown by Hart [Reference Hart14], the GMD can also be defined as a central moment, via its covariance representation. Had both sides known about the alternative representations of the GMD, this debate, which was a source of conflict between the Italian school and what Gini viewed as the Western schools, could have been averted; see Gini [Reference Gini11, Reference Gini12]. The interested reader can find many results concerning the GMD in La Haye and Zizler [Reference La Haye and Zizler23].

Remark 2. It is of interest that the same result can be obtained by integrating the compensator process $A_s + B_s$ of the series system lifetime $ S\wedge T = \min \{ S, T\}$ over the domain from 0 to the maximum $ S\vee T = \max \{ S, T\}$ ; see Aven and Jensen [Reference Aven and Jensen3]:

\begin{align*}\mathbb{E}\left[ \int_0^{S\vee T } (A_s + B_s) ds \right] &= \mathbb{E}\left[ \int_0^{S\vee T } \left(\int_0^s dA_u \right)ds + \int_0^{S\vee T } \left(\int_0^sdB_u \right)ds\right] \\&= \mathbb{E}\left[ \int_0^{S\vee T } \left(\int_u^{S\vee T } ds \right) dA_u + \int_0^{S\vee T } \left(\int_u^{S\vee T } ds \right) dB_u \right] \\&= \mathbb{E}\left[\int_0^{S\vee T } (S\vee T - u) dA_u + \int_0^{S\vee T } (S\vee T - u ) dB_u \right] \\&= \mathbb{E}[ (S\vee T - T) 1_{ \{ T \leq S\vee T \}} + (S\vee T - S) 1_{ \{S \leq S\vee T \}}] \\&= \mathbb{E}[ (S - T) 1_{ \{ T \leq S \}} + 0 + ( T - S) 1_{ \{S \leq T \}}] =\mathbb{E}[| T - S|].\end{align*}

In the framework of univariate point processes and martingale theory, we analyze here an inaccuracy measure between two point processes related to Markov chains. The rest of this paper consists of two sections. Section 2 deals with the cumulative inaccuracy measure for non-explosive point processes and its applications to a minimal repair point process and to a minimally repaired coherent system. In Section 3, the cumulative inaccuracy measure is specialized to point-process occurrence times relating to Markov chains. Special attention is paid to the case when the processes satisfy the proportional risk hazard process property, in which case we characterize the Markov chain through the cumulative inaccuracy measure. We illustrate the theoretical results with several examples involving birth-and-death processes and pure birth processes. We also apply the results to a coherent system, observed physically at the component level, whose components fail and are repaired according to a Markovian law.

2. Inaccuracy measure between point processes

2.1. Cumulative inaccuracy measure between non-explosive point processes

A univariate point process over $\mathbb{R}^+$ can be described by an increasing sequence of random variables or by means of its corresponding counting process.

Definition 2. A univariate point process is an increasing sequence $ T = (T_n)_{n \geq 0}$ , with $T_0 = 0$ , of positive extended random variables, $ 0 \leq T_1 \leq T_2 \leq \ldots$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ . The inequalities are strict unless $T_n = \infty$ . If $T_{\infty} = \lim_{n \rightarrow \infty} T_n = \infty$ , the point process is said to be non-explosive.

Another equivalent way of describing a univariate point process is through a counting process $N^T = (N_t^T)_{t \geq 0}$ with

\begin{equation*} N_t^T(w) = \sum_{k \geq 1} 1_{\{ T_k(w) \leq t \}},\end{equation*}

which is, for each realization w, a right-continuous step function with $N_0(w) = 0$ . As $(N_t)_{t \geq 0}$ and $(T_n)_{n \geq 0}$ carry the same information, the associated counting process is also called a point process.

The mathematical description of our observations is given by the internal family of sub- $\sigma$ -algebras of $\Im$ , denoted by $(\Im_t^T)_{t \geq 0}$ , where

\begin{equation*}\Im_t^T = \sigma \{1_{ \{ T_i >s \}} , i \geq 1, 0 < s < t \}\end{equation*}

satisfies the Dellacherie conditions of right-continuity and completeness.

For a mathematical basis for applied stochastic processes, one may refer to Aven and Jensen [Reference Aven and Jensen3]. In particular, an extended and positive random variable $\tau$ is an $\Im_t^T$ -stopping time if, and only if, $\{ \tau \leq t \} \in \Im_t^T$ , for all $t \geq 0$ ; an $\Im_t^T$ -stopping time $\tau$ is said to be predictable if an increasing sequence $(\tau_n)_{n \geq 0 }$ of $\Im_t^T$ -stopping times, $\tau_n < \tau $ , exists such that $\lim_{n\rightarrow \infty } \tau_n = \tau $ ; an $\Im_t^T$ -stopping time $\tau$ is totally inaccessible if $\mathbb{P}(\tau = \sigma < \infty) = 0$ for every predictable $\Im_t^T$ -stopping time $\sigma$ .

In what follows, we assume that relations between random variables and measurable sets always hold with probability one, which means that the term $\mathbb{P}$ -almost surely can be suppressed.

The point process $(N_t^{ {\bf T} })_{t \geq 0}$ is adapted to $ (\Im_t^{ {\bf T} } )_{t \geq 0}$ and $\mathbb{E}[N_t^{ {\bf T} }| \Im_s^{ {\bf T} }] \geq N_s^{ {\bf T} } $ for $s < t$ ; that is, $N_t^{ {\bf T} }$ is a uniformly integrable $\Im_t^{ {\bf T} }$ -submartingale. Then, from the Doob–Meyer decomposition, there exists a unique right-continuous nondecreasing $\Im_t^{ {\bf T} }$ -predictable and integrable process $(A_t^{ {\bf T} })_{t \geq 0}$ , with $A_0^{ {\bf T} }= 0$ , such that $(M_t^{ {\bf T}})_{t \geq 0}$ , with $ N_t^{ {\bf T} } = A_t^{ {\bf T} } + M_t^{ {\bf T} }$ , is a uniformly integrable $\Im_t^{ {\bf T} }$ -martingale. In many cases, the $\Im_t^{ {\bf T} }$ -compensator, $(A_t^{ {\bf T} })_{t \geq 0}$ , of a counting process $(N_t^{ {\bf T} })_{t \geq 0}$ can be represented in the form of an integral as

\begin{equation*}A_t^{ {\bf T} } = \int_0^t \lambda_s^{ {\bf T} } ds\end{equation*}

for some non-negative ( $\Im_t^{ {\bf T} }$ -progressively measurable) stochastic process $(\lambda_t^{ {\bf T} })_{t \geq 0}$ , called the $\Im_t^{ {\bf T}}$ -intensity of $(N_t^{ {\bf T} })_{t \geq 0}$ .

The compensator process is expressed in terms of conditional probabilities, given the available information, and it generalizes the classical notion of hazard. Intuitively, it quantifies the propensity of the next failure to occur now, on the basis of all observations available up to, but not including, the present time.

Following Aven and Jensen [Reference Aven and Jensen3], the compensator process is given by the following theorem.

Theorem 1. Let $(N_t^{ {\bf T} })_{t \geq 0}$ be an integrable point process and $(\Im_t^{ {\bf T} })_{t \geq 0}$ its internal history. Suppose that for each n there exists a regular conditional distribution of $T_{n+1}- T_n$ , given the past $ \Im_{T_n}^{ {\bf T} },$ of the form

\begin{equation*} G_n(w, A) = \mathbb{P}(T_{n+1} - T_n \in A| \Im_{T_n}^{ {\bf T} })(w) = \int_{A}g_n(w, s)ds,\end{equation*}

where $g_n(w, s)$ is a measurable function. Then the process given by $\lambda_t^{{\bf T}} = \sum_{n=0}^{\infty} \lambda_t^n$ , where

\begin{equation*} \lambda_t^n = \frac{ g_n(t - T_n)}{G_n([t - T_n, \infty))}1_{\{ T_n < t \leq T_{n+1} \}} = \frac{ g_n(t - T_n)}{1 - \int_0^{t - T_n} g_n(s) ds} 1_{\{ T_n < t \leq T_{n+1} \}}, \end{equation*}

is called the $\Im_t^{ {\bf T} } $ -intensity of $N_t^{ {\bf T} }$ , and

\begin{equation*} N_t^{{\bf T}} - \int_0^t \lambda_s^{{\bf T}} ds \end{equation*}

is an $\Im_t^T$ -martingale.
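The hazard form of $\lambda_t^n$ in Theorem 1 is easy to evaluate for a renewal process, where the inter-arrival distribution $G_n$ does not depend on n. The following sketch (ours; the event times and inter-arrival distributions are illustrative) computes $\lambda_t = g(t - T_n)/\big(1 - G(t - T_n)\big)$ on $(T_n, T_{n+1}]$; for exponential inter-arrivals the intensity is constant, as expected.

```python
import numpy as np
from scipy.stats import gamma, expon

def renewal_intensity(t, event_times, dist):
    """Hazard-form intensity of Theorem 1 for a renewal process:
    lambda_t = g(t - T_n) / (1 - G(t - T_n)) on (T_n, T_{n+1}]."""
    # T_n = last event time strictly before t (with T_0 = 0).
    past = [s for s in event_times if s < t]
    T_n = past[-1] if past else 0.0
    u = t - T_n
    return dist.pdf(u) / dist.sf(u)

events = [1.3, 2.9, 5.1]

# Gamma(2, 1) inter-arrivals: increasing hazard, reset at each event.
print(renewal_intensity(2.0, events, gamma(a=2.0)))

# Exponential inter-arrivals: constant hazard (rate 1), whatever t is.
print(renewal_intensity(2.0, events, expon()),
      renewal_intensity(4.0, events, expon()))
```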

We note that the compensator process is defined piecewise:

\begin{equation*} A_t^{{\bf T}} = A_{T_n}^{{\bf T}} + \int_{T_n}^{t} \lambda_s^n ds, \ \ T_n < t \leq T_{n+1}.\end{equation*}

Our aim now is to define a cumulative residual inaccuracy measure between two independent non-explosive point processes, ${\bf T}$ and ${\bf S}$ , by means of their superposition process.

Definition 3. The superposition of two univariate point processes ${\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ with compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively, is the marked point process $(V_n, U_n)_{n \geq 1}$ resulting from pooling together the time points of events occurring in each of the two separate point processes. Here ${\bf V} = (V_n)_{n \geq 0}$ , with $V_0 = 0$ , is a univariate point process, and ${\bf U} = (U_n)_{n \geq 0}$ , the indicator process, is a sequence of random variables taking values in the measurable space $(\{0,1 \} ,\sigma(\{0,1 \} ))$ . The value 0 stands for an occurrence of the process ${\bf T}$ , that is, $V_n = T_k$ for some k, in which case $V_n = \max_{1 \leq j \leq n}\{ (1-U_j)\cdot V_j \}$ ; the value 1 stands for an occurrence of the process ${\bf S}$ , that is, $V_n = S_j$ for some j, in which case $V_n = \max_{1 \leq j \leq n}\{ U_j \cdot V_j \}$ .

Now, we define

\begin{equation*}N_t(0) = \sum_{n=1}^{\infty} 1_{ \{U_n =0\}} 1_{\{ V_n \leq t\}}\end{equation*}

as the number of occurrences of the process T and

\begin{equation*}N_t(1) = \sum_{n=1}^{\infty} 1_{ \{ U_n =1\}} 1_{\{ V_n \leq t\}}\end{equation*}

as the number of occurrences of the process S on the superposition process.

The observed history is thus

\begin{equation*}\Im_t^V = \sigma \{ N_s(0), N_s(1), 0 \leq s < t\} = \sigma\{ (V_n, U_n), V_n <t\}. \end{equation*}
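The superposition construction of Definition 3 amounts to merging the two event sequences and recording, for each pooled time, which process it came from. A minimal sketch (ours; the event times are illustrative) builds $(V_n, U_n)$ and evaluates the counting variables $N_t(0)$ and $N_t(1)$:

```python
import numpy as np

def superpose(T_times, S_times):
    """Pool the event times of processes T and S into the marked point
    process (V_n, U_n): U_n = 0 for an occurrence of T, 1 for one of S."""
    V = np.concatenate([T_times, S_times])
    U = np.concatenate([np.zeros(len(T_times), int), np.ones(len(S_times), int)])
    order = np.argsort(V)
    return V[order], U[order]

def counts_at(t, V, U):
    """N_t(0) and N_t(1): numbers of occurrences of T and of S up to time t."""
    seen = (V <= t)
    return int(np.sum(seen & (U == 0))), int(np.sum(seen & (U == 1)))

V, U = superpose([0.5, 2.0, 3.7], [1.1, 1.9])
print(list(V))              # [0.5, 1.1, 1.9, 2.0, 3.7]
print(list(U))              # [0, 1, 1, 0, 0]
print(counts_at(2.5, V, U)) # (2, 2): two T-events and two S-events by t = 2.5
```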

Theorem 2. Let $(V_n, U_n)_{n \geq 1}$ be a marked point process, the superposition of two univariate point processes ${\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ with $\Im_t^V $ -compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively. Furthermore, let $N_t(0)$ and $N_t(1)$ be the $\Im_t$ -submartingales defined as the number of occurrences of the processes T and S, respectively. Then $N_t(0)$ has $\Im_t$ -compensator

\begin{equation*}\displaystyle\sum_{n=1}^{\infty} \int_0^t 1_{\{ U_n = 0 \}} dA_s^n, \end{equation*}

and $N_t(1)$ has $\Im_t$ -compensator

\begin{equation*}\displaystyle\sum_{n=1}^{\infty} \int_0^t 1_{\{ U_n = 1 \}} dB_s^n .\end{equation*}

Proof. To prove this theorem, we use the known result that the integration of an $\Im_t$ -predictable process with respect to an integrable $\Im_t$ -martingale of bounded variation is an $\Im_t$ -martingale.

Observe that the deterministic process

\begin{equation*} 1_{\{ U_n = 0 \}}(w,s) = 1_{\{ U_n = 0 \}}(w)\end{equation*}

is left-continuous and, therefore, $\Im_t$ -predictable, implying that

\begin{equation*} \int_0^t 1_{\{ U_n = 0 \}}(s) dM_s^n,\end{equation*}

where $ M_t^n = 1_{ \{ T_n \leq t\}} - A_t^n$ , is an $\Im_t$ -martingale.

As the sum of $\Im_t$ -martingales is an $\Im_t$ -martingale, we readily have that

\begin{equation*} \displaystyle \sum_{n=1}^{\infty} \int_0^t 1_{\{ U_n = 0 \}} dM_s^n =\displaystyle \sum_{n=1}^{\infty} \int_0^t 1_{\{ U_n = 0 \}} d1_{\{ T_n \leq s \}} -\displaystyle\sum_{n=1}^{\infty} \int_0^t 1_{\{ U_n = 0 \}} dA_s^n\end{equation*}

is an $\Im_t$ -martingale. As the compensator is unique, the proof is readily completed. The proof for the $N_t(1)$ process follows in an analogous manner.

Remark 3. In view of Theorem 2 and Definition 3, the compensators can, without loss of generality, be rewritten as

\begin{equation*} B_t = B_{S_n} + \int_0^{t-S_n} dB_s = B_{V_n^{{*}}} + \int_0^{t-V_n^{*}} dB_s, \ \ V_n < t \leq V_{n+1},\end{equation*}

where $V_n^{{*}} = \max_{1 \leq j < n} \{ U_j \cdot V_j \},$ and

\begin{equation*} A_t = A_{T_n} + \int_0^{t-T_n} dA_s = A_{V_n^{{*}}} + \int_0^{t-V_n^{*}} dA_s, \ \ V_n < t \leq V_{n+1},\end{equation*}

where $V_n^{{*}} = \max_{1 \leq j < n} \{ (1- U_j)\cdot V_j \}.$

In view of Definition 1 in the introduction, we define a cumulative inaccuracy measure between two univariate point processes at any $\Im_t$ -stopping time $\tau$ , and in particular at time t, as follows.

Definition 4. Let $ {\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ be univariate point processes with $\Im_t^V $ -compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively, defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ . Let $(V_n)_{n \geq 0}$ be their superposition process. Then the cumulative residual inaccuracy measure at time t between $ {\bf T}$ and $ {\bf S}$ is given by

\begin{equation*} CRI_t(N^{ {\bf T} }, N^{ {\bf S} }) = \mathbb{E}\left[ \int_0^t \displaystyle\sum_{n=1}^{\infty} \int_0^s 1_{\{ U_n = 0 \}} dA_u^n ds + \int_0^t \displaystyle\sum_{n=1}^{\infty} \int_0^s 1_{\{ U_n = 1 \}} dB_u^n ds \right].\end{equation*}

It is important to observe that, in view of Theorem 2, the indicator process is essential in Definition 4.

An interpretation of Definition 4 is given by the following theorem.

Theorem 3. Let $ {\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ be two non-explosive univariate point processes, and let ${\bf V} = (V_n)_{n \geq 0}$ be their superposition process. Then

\begin{equation*} \int_0^t A_s ds + \int_0^t B_s ds = \sum_{n=1}^{N_t^{{\bf V} }} [ 1_{ \{ U_n = 1 \}} + 1_{ \{ U_n = 0\}}] |V_n - V_{n-1}| = \sum_{n=1}^{N_t^{{\bf V} }} |V_n - V_{n-1}|,\end{equation*}

where

\begin{equation*} \{U_k = 1 \} = \bigcup_{j=1}^{\infty} \{ T_j \leq S_{k-1}\wedge t \} \bigcup\{ T_{j-1} < S_k \leq T_j \wedge t \},\end{equation*}
\begin{equation*} \{U_k = 0 \} = \bigcup_{j=1}^{\infty} \{S_k \leq T_{j-1}\wedge t \} \bigcup\{ S_{k-1} < T_j \leq S_k \wedge t \}.\end{equation*}

Proof. We let $(\tau_n^{ \bf T})_{n \geq 0}$ be an increasing sequence of $\Im_t$ -stopping times serving as the localizing sequence of the stopped martingale $( N_{t\wedge { \tau_n^{ \bf T}}}^{ \bf T} - A_{t \wedge { \tau_n^{ \bf T} }} ) _{t \geq 0} $ , and let $(\tau_n^{ \bf S})_{n \geq 0}$ be an increasing sequence of $\Im_t$ -stopping times serving as the localizing sequence of the stopped martingale $( N_{t\wedge {\tau_n^{ \bf S} }}^{ \bf S} - B_{t \wedge {\tau_n^{ \bf S}}} ) _{t \geq 0}$ . We then apply the optional sampling theorem; see Aven and Jensen [Reference Aven and Jensen3].

Note that $\tau_n = \tau_n^{ \bf T} \vee \tau_n^{ \bf S}$ is also an $\Im_t$ -stopping time and that the point process $(S_k)_{k \geq 0}$ defines a partition of $ \mathbb{R}^+$ , that is, $ \mathbb{R}^+ = \cup_{k=1}^{\infty} (S_{k-1}, S_k]$ . Therefore, we can write

\begin{align*} \int_0^{\tau_n} A_s ds &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \int_{S_{k-1}}^{S_k \wedge {\tau_n}} A_s ds = \sum_{k=1}^{N_{\tau_n}^{{\bf S}}} \int_{S_{k-1}}^{S_k \wedge {\tau_n}} \left( \int_0^s dA_u \right) ds \\ &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \left[ \int_0^{S_{k-1}} \left(\int_{S_{k-1}}^{S_k \wedge {\tau_n}} ds\right) dA_u + \int_{S_{k-1}}^{S_k \wedge {\tau_n}} \left( \int_u^{S_k \wedge {\tau_n}}ds\right) dA_u \right] \\ &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \left[ \int_0^{S_{k-1}} ({S_k \wedge {\tau_n}} - S_{k-1}) dA_u + \int_{S_{k-1}}^{S_k \wedge {\tau_n}} ( {S_k \wedge {\tau_n}} - u) dA_u \right].\end{align*}

However, the compensator differential $dA_u$ is defined by parts and can be written as

\begin{equation*}dA_u = \sum_{j=1}^{\infty} 1{\{ T_{j-1} < u \leq T_j \}} dA_u(j),\end{equation*}

where $dA_u(j)$ is the differential compensator of $ 1_{\{ T_j \leq t\}}$ defined in $( T_{j-1}, T_j]$ , and 0 otherwise. It then follows that

\begin{align*} \int_0^{\tau_n} A_s ds &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} \left[ \int_{T_{j-1}}^{S_{k-1} \wedge T_j} ({S_k \wedge {\tau_n}} - S_{k-1}) dA_u(j) + \int_{S_{k-1} \vee T_{j-1}}^{{S_k \wedge {\tau_n}} \wedge T_j} ( {S_k \wedge {\tau_n}} - u) dA_u(j) \right] \\ &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} \left[ ({S_k \wedge {\tau_n}} - S_{k-1})1_{\{ T_j \leq S_{k-1} \wedge T_j \}} + ( {S_k \wedge {\tau_n}} - T_j)1_{\{S_{k-1} \vee T_{j-1} < T_j \leq {S_k \wedge {\tau_n}} \wedge T_j \}} \right] \\ &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} \left[ ({S_k \wedge {\tau_n}} - S_{k-1})1_{\{ T_j \leq S_{k-1}\}} + ( {S_k \wedge {\tau_n}} - T_j) 1_{\{ S_{k-1} < T_j \leq {S_k \wedge {\tau_n}}\}} \right].\end{align*}

Using similar arguments we can prove that

\begin{equation*} \int_0^{\tau_n} B_s ds = \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} [ (T_j\wedge {\tau_n} - T_{j-1})1_{\{ S_k \leq T_{j-1}\}} + ( T_j \wedge {\tau_n} - S_k) 1_{\{ T_{j-1} < S_k \leq T_j \wedge {\tau_n}\}} ] .\end{equation*}

Hence we have

\begin{align*}\int_0^{\tau_n} A_s ds + \int_0^{\tau_n} B_s ds &= \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} [ ({S_k \wedge {\tau_n}} - S_{k-1})1_{\{ T_j \leq S_{k-1}\}} + ( {S_k \wedge {\tau_n}} - T_j) 1_{\{ S_{k-1} < T_j \leq {S_k \wedge {\tau_n}}\}} ] \\ &\quad + \sum_{j=1}^{N_{\tau_n}^{{\bf T} }} \sum_{k=1}^{N_{\tau_n}^{{\bf S} }} [ (T_j\wedge {\tau_n} - T_{j-1})1_{\{ S_k \leq T_{j-1}\}} + ( T_j \wedge {\tau_n} - S_k) 1_{\{ T_{j-1} < S_k \leq T_j \wedge {\tau_n}\}} ] \\ &= \sum_{k=1}^{N_{\tau_n}^{{\bf V} }} [ 1_{ \{ U_k = 1 \}} + 1_{ \{ U_k = 0\}}]|V_k - V_{k-1}| = \sum_{k=1}^{N_{\tau_n}^{{\bf V} }} |V_k - V_{k-1}|,\end{align*}

which is the sum of the inter-arrival times of the superposition process, where

\begin{equation*} \{U_k = 1 \} = \bigcup_{j=1}^{\infty} \{ T_j \leq S_{k-1}\wedge {\tau_n} \} \bigcup\{ T_{j-1} < S_k \leq T_j \wedge {\tau_n}\},\end{equation*}
\begin{equation*} \{U_k = 0 \} = \bigcup_{j=1}^{\infty} \{S_k \leq T_{j-1}\wedge {\tau_n} \} \bigcup\{ S_{k-1} < T_j \leq S_k \wedge {\tau_n} \}.\end{equation*}

As $( N_{t\wedge \tau_n}^{\bf V}) _{t \geq 0} $ is uniformly integrable, we let $\lim_{n \rightarrow \infty} \tau_n = \infty$ ; provided that we identify random variables that are equal almost everywhere, the quantity $CRI_{\infty}(N^{ {\bf T} }, N^{ {\bf S} }) = \sum_{k=1}^{\infty} |V_k - V_{k-1}| $ , as t goes to infinity, can be seen as a dispersion measure in the $L^1$ space of sequences of random variables when the point process ${\bf S}$ , which represents the information asserted by the experimenter, is used in place of the true point process ${ \bf T}$ .

2.2. Application to minimal repair point processes

A repair is minimal if the intensity $\lambda_t^T$ is not affected by the occurrence of failures, or, in other words, if we cannot determine the failure time points from observation of $\lambda_t^T$ . Formally, we have the following definition.

Definition 5. Let $T = (T_n)_{n \geq 0}$ be a univariate point process with an integrable point process $N^T$ and corresponding $\Im_t$ -intensity $(\lambda_t^T)_{t \geq 0}.$ Let $ \Im_t^{\lambda^T} = \sigma( \lambda_s^T, 0 \leq s \leq t) $ be the filtration generated by $\lambda^T$ . Then the point process T is said to be a minimal repair process (MRP) if none of the variables $T_n$ , $n \geq 0$ , for which $\mathbb{P}(T_n < \infty) > 0$ is an $\Im_t^{\lambda^T}$ -stopping time.

If T is a non-homogeneous Poisson process, $\lambda_t = \lambda(t)$ is a deterministic function of time, which means that the age is not reset by a failure. Here, $ \Im_t^{\lambda^T} = \{ \Omega, \emptyset \}$ for all $ t \in \mathbb{R}^+$ , and the failure times $T_n$ are not $\Im_t^{\lambda^T}$ -stopping times.
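Such a minimal repair process is easy to simulate by the time-transform construction: if $\Gamma_n$ are the partial sums of independent $\mathrm{Exp}(1)$ variables, then $T_n = \Lambda^{-1}(\Gamma_n)$ are the event times of an NHPP with cumulative intensity $\Lambda$. The following sketch (ours; the Weibull cumulative intensity $\Lambda(t) = (t/\theta)^{\beta}$ and the parameter values are illustrative) generates such a process on $[0, t_{\max}]$:

```python
import numpy as np

rng = np.random.default_rng(2)

def weibull_process(theta, beta, t_max):
    """Event times of an NHPP with cumulative intensity (t/theta)**beta
    (a minimal repair process), via the time transform T_n = Lam^{-1}(Gamma_n),
    where Gamma_n are partial sums of iid Exp(1) variables."""
    times, total = [], 0.0
    while True:
        total += rng.exponential(1.0)        # next point of a unit-rate Poisson process
        t = theta * total ** (1.0 / beta)    # inverse of Lam(t) = (t/theta)**beta
        if t > t_max:
            return np.array(times)
        times.append(t)

events = weibull_process(theta=1.0, beta=2.0, t_max=10.0)
# E[N_t] = (t/theta)**beta; with theta = 1, beta = 2, t = 10 this is 100.
print(len(events))
```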

Example 1. Let $ (T_n)_{n \geq 0 }$ be a Weibull process with parameters $\beta$ and $\theta_1$ . Let $ (S_n)_{n \geq 0 }$ be a Weibull process with parameters $\beta$ and $\theta_2$ asserted by the experimenter.

In practice, we consider the ordered lifetimes $T_1, \ldots ,T_n $ with a conditional reliability function given by

\begin{equation*} \overline{G_i}(t_i|t_1,\ldots ,t_{i-1}) = \exp{ \left[{-} \left( \frac{t_i}{\theta_1}\right)^{\beta} + \left( \frac{t_{i-1}}{\theta_1}\right)^{\beta} \right] } \end{equation*}

for $0 \leq t_{i-1} < t_i$ , where the $t_i$ are the ordered observations. The $\Im^T$ -compensator process is then

\begin{equation*}A_t = \sum_{j=1}^n \left[\left(\frac{t_j}{\theta_1}\right)^{\beta} - \left(\frac{t_{j-1}}{\theta_1}\right)^{\beta} \right] + \left[\left(\frac{t}{\theta_1}\right)^{\beta} - \left(\frac{t_n}{\theta_1}\right)^{\beta} \right] = \left(\frac{t}{\theta_1}\right)^{\beta}, \ \ t_n \leq t < t_{n+1}.\end{equation*}

Furthermore, with respect to $ (S_n)_{n \geq 0 }$ , the $\Im^S$ -compensator process is

\begin{equation*} B_t = \sum_{j=1}^n \left[ \left(\frac{s_j}{\theta_2}\right)^{\beta} - \left( \frac{s_{j-1}}{\theta_2}\right)^{\beta} \right] + \left[\left(\frac{t}{\theta_2}\right)^{\beta} - \left(\frac{s_n}{\theta_2}\right)^{\beta} \right] = \left( \frac{t}{\theta_2}\right)^{\beta} , \ \ s_n \leq t < s_{n+1}.\end{equation*}

Note that the compensator process of a minimal repair point process is deterministic and so does not depend on the occurrence times; in this case the indicator process is not needed. Therefore, the cumulative inaccuracy measure at time t is given by

\begin{equation*} CRI_t(N^T, N^S) = \mathbb{E}\left[ \int_0^t A_s ds + \int_0^t B_s ds \right] = \mathbb{E}\left[ \int_0^t \left(\frac{s}{\theta_1}\right)^{\beta} ds + \int_0^t \left(\frac{s}{\theta_2}\right)^{\beta} ds \right] \end{equation*}
\begin{equation*} = \frac{t^{\beta+1}}{\beta+1} \left(\frac{\theta_1^{\beta} + \theta_2^{\beta}}{\theta_1^{\beta} \theta_2^{\beta}}\right).\end{equation*}
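As a numerical sanity check, the closed form above can be compared with a direct quadrature of $\int_0^t (s/\theta_1)^{\beta} ds + \int_0^t (s/\theta_2)^{\beta} ds$. The Python sketch below (parameter values are illustrative) does this with a midpoint rule:

```python
def cri_weibull(t, beta, th1, th2, n=200000):
    """Midpoint-rule evaluation of int_0^t (s/th1)^beta ds + int_0^t (s/th2)^beta ds,
    i.e. of E[int_0^t A_s ds + int_0^t B_s ds] for the deterministic compensators
    A_s = (s/th1)^beta and B_s = (s/th2)^beta."""
    h = t / n
    return sum(((h * (k + 0.5)) / th1) ** beta + ((h * (k + 0.5)) / th2) ** beta
               for k in range(n)) * h

t, beta, th1, th2 = 2.0, 1.7, 1.0, 3.0
closed = t ** (beta + 1) / (beta + 1) * (th1 ** beta + th2 ** beta) / (th1 ** beta * th2 ** beta)
numeric = cri_weibull(t, beta, th1, th2)
```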

Example 2. (Application to a coherent system minimally repaired at component level.) Suppose we observe, as in Barlow and Proschan [Reference Barlow and Proschan4], the lifetimes of a system with three components, $U_1$ , $U_2$ , and $U_3$ , which are independent and identically exponentially distributed with parameter $\lambda$ , through the filtration

\begin{equation*} \Im_t = \sigma \{ 1_{\{U_1>s\}}, 1_{\{U_2>s\}}, 1_{\{U_3>s\}} , 0 \leq s \leq t \}.\end{equation*}

The system with lifetime $T_1 = U_1 \wedge ( U_2 \vee U_3 )$ has intensity $\lambda_t^{T_1} = \lambda+ \lambda 1_{\{ U_2 \wedge U_3 \leq t \}}$ , and clearly $T_1$ is not an $\Im_t^{\lambda^{T_1}}$ -stopping time.

At system failure $T_1$ , the component that causes the system to fail is repaired minimally. As the component lifetimes are independent and identically distributed, the additional lifetime, given by the lifetime $U_4$ , is independent of and distributed identically to $U_1$ , $U_2$ , and $U_3$ , and the repaired system then has lifetime $T_2 = T_1 + U_4.$

We allow for repeated minimal repairs considering a sequence of random variables, $(U_n)_{n \geq 1}$ , that are independent and identically exponentially distributed with parameter $\lambda$ . Then $ T_1 = U_1 \wedge (U_2 \vee U_3) $ and $ T_{n+1} = T_n + U_{n+3},$ $ n \geq 1$ , successively, constituting a minimal repair point process with compensator

\begin{equation*}A_t = \int_0^t\lambda_s^T ds = \int_0^t \left[ \lambda + \lambda 1_{\{ U_2 \wedge U_3 \leq s \}}\right] ds = 2 \lambda t - \lambda ( U_2 \wedge U_3 ) \ \ \text{if} \ \ t \leq T, \end{equation*}

where T is the actual system lifetime.

Let us now consider another minimally repaired coherent system, $S_1$ , asserted by the experimenter, with the same structure function, but component lifetimes $V_1$ , $V_2$ , and $V_3$ , which are independent and identically exponentially distributed with parameter $\lambda^*$ and compensator process

\begin{equation*}B_t = \int_0^t\lambda_s^S ds = 2 \lambda^* t - \lambda^* ( V_2 \wedge V_3 ) \ \ \text{if} \ \ t \leq S, \end{equation*}

where S is the actual system lifetime.

Then, in the set $\{ t \leq T \wedge S \}$ , the cumulative inaccuracy measure at time t is given by

\begin{equation*} CRI_t(N^T, N^S)= \mathbb{E}\left[ \int_0^t A_s ds + \int_0^t B_s ds \right] \end{equation*}
\begin{equation*} = \mathbb{E}\left[ \int_0^t [2 \lambda s - \lambda ( U_2 \wedge U_3 )] ds + \int_0^t [ 2 \lambda^* s - \lambda^* ( V_2 \wedge V_3 )] ds\right] \end{equation*}
\begin{equation*} = \lambda t^2 - \lambda t \mathbb{E}[ U_2 \wedge U_3] + \lambda^* t^2 - \lambda^* t \mathbb{E}[ V_2 \wedge V_3] = (\lambda + \lambda^*) t^2 - t. \end{equation*}

Clearly, the expression for $ CRI_t(N^T, N^S)$ can be negative for small t. However, the superposition process is also a minimal repair process, whose first occurrence time is exponentially distributed with mean $\frac{1}{\lambda + \lambda^*}$ . For $t \geq \frac{1}{\lambda + \lambda^*}$ , the measure $ CRI_t(N^T, N^S)$ is positive.
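The value $(\lambda + \lambda^*)t^2 - t$ follows from $\mathbb{E}[U_2 \wedge U_3] = \frac{1}{2\lambda}$ and $\mathbb{E}[V_2 \wedge V_3] = \frac{1}{2\lambda^*}$, since the minimum of two independent exponentials is again exponential with twice the rate. A Monte Carlo sketch in Python (rates and sample size are illustrative):

```python
import random

rng = random.Random(7)
lam, lam_star, t = 1.0, 2.0, 1.5

def mean_min_exp(rate, n=200000):
    """Monte Carlo estimate of E[min(U, V)] for an i.i.d. Exp(rate) pair;
    the minimum is Exp(2*rate), so the target value is 1/(2*rate)."""
    return sum(min(rng.expovariate(rate), rng.expovariate(rate)) for _ in range(n)) / n

m_u = mean_min_exp(lam)       # estimates E[U_2 ^ U_3] = 1/(2*lam)
m_v = mean_min_exp(lam_star)  # estimates E[V_2 ^ V_3] = 1/(2*lam_star)
# CRI_t = lam*t^2 - lam*t*E[U_2 ^ U_3] + lam_star*t^2 - lam_star*t*E[V_2 ^ V_3]
cri = lam * t**2 - lam * t * m_u + lam_star * t**2 - lam_star * t * m_v
closed = (lam + lam_star) * t**2 - t
```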

Also, in the minimally repaired coherent system, the compensator process is independent of the occurrence times, and so the indicator process does not apply.

3. Inaccuracy measure between point processes related to Markov chains

3.1. Inaccuracy measure between occurrence times in Markov chains

Let $(X_t)_{t \geq 0}$ be an E-valued process defined in a probability space $(\Omega,\Im, \mathbb{P})$ and adapted to some history $(\Im_t)_{t \geq 0}$ . The observations are through its internal history

\begin{equation*}\Im_t^X = \sigma\{ X_s, s \leq t\}\end{equation*}

for all $t \geq 0$ , and $ \Im_t^X \subseteq \Im_t$ for all $t \geq 0$ . Then $\Im_{\infty}^X $ records all the events linked to the process $(X_t)_{t \geq 0}$ .

The process $(X_t)_{t \geq 0}$ is said to be an $\Im_t$ -Markov process if, and only if, for all $t \geq 0$ , $ \sigma(X_s, s > t)$ and $\Im_t$ are independent, given $ X_t$ . In particular, if E is the set of natural numbers ${\bf N^+}$ , we call $(X_t)_{t \geq 0}$ an $\Im_t$ -Markov chain. In what follows, we consider a right-continuous Markov chain with left limits.

The $\Im_t$ -Markov chain is associated with a sequence of its sojourn times $ (T_{n+1} - T_n)_{n \geq 0}$ , with $ T_0 = 0$ , and its infinitesimal characteristics $Q = [q_{i,j}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ . If, for each natural number i, we have

\begin{equation*}q_i = \sum_{j \neq i} q_{i,j} < \infty,\end{equation*}

then the chain is said to be stable and conservative. We set $q_{i,i} = - q_i$ .

The interpretation of the terms $q_i$ and $q_{i,j}$ is as follows:

\begin{equation*}\mathbb{P}(T_{n+1} - T_n > t | \Im_{T_n}^X) = e^{-t q_{X_{T_n}}}, \ \ t > 0,\end{equation*}
\begin{equation*}\mathbb{P}(X_{T_{n+1}} = i, T_{n+1} - T_n > t | \Im_{T_n}^X) = e^{-t q_{X_{T_n}}}\left( \frac{q_{X_{T_n},i}}{q_{X_{T_n}}}\right), \ \ t > 0.\end{equation*}
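These two formulas are exactly the classical sojourn-time construction of the chain: hold state $i$ for an exponential time with rate $q_i$, then jump to $j$ with probability $q_{i,j}/q_i$. A minimal Python sketch of this construction (the generator matrix below is a hypothetical example, not from the paper):

```python
import random

def simulate_chain(Q, x0, n_jumps, rng):
    """Simulate the first n_jumps occurrence times of a stable, conservative
    Markov chain from its generator Q = [q_ij] (rows sum to zero), using the
    sojourn-time construction: hold state i for an Exp(q_i) sojourn, then
    jump to state j with probability q_ij / q_i."""
    state, t = x0, 0.0
    path = [(0.0, x0)]
    for _ in range(n_jumps):
        qi = -Q[state][state]                  # q_i = sum_{j != i} q_ij
        t += rng.expovariate(qi)               # sojourn time T_{n+1} - T_n
        r, acc = rng.random() * qi, 0.0
        for j, q in enumerate(Q[state]):
            if j == state:
                continue
            acc += q
            if r <= acc:
                state = j
                break
        path.append((t, state))
    return path

rng = random.Random(0)
Q = [[-2.0, 1.5, 0.5],
     [1.0, -3.0, 2.0],
     [0.5, 0.5, -1.0]]
path = simulate_chain(Q, 0, 5, rng)
```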

We are now interested in the cumulative inaccuracy process between point processes related to Markov chain occurrence times, with

  • $N_t^X(k, l)$ denoting the number of transitions from state k to state l in the interval (0, t],

  • $N_t^X(l)$ denoting the number of transitions into state l during the interval (0, t], and

  • $N_t^X$ denoting the number of transitions in (0, t].

Now, the occurrence observation times are through the internal family of sub- $\sigma$ -algebras of  $\Im$ ,

\begin{equation*}\Im_t^{\bf T} = \sigma \{ N_s^{\bf T}, 0 \leq s < t \} \subseteq \Im_t^X \subseteq \Im_t, \end{equation*}

and satisfy the Dellacherie conditions of right-continuity and completeness. This family of counts leads to the same information as the sequence of occurrence times $ {\bf T}$ and hence provides an equivalent point process description. Clearly, from the sojourn times interpretation, the $ T_n,$ $n > 0$ , are totally inaccessible $\Im_t$ -stopping times. In the same way, occurrence times with absolutely continuous distributions are totally inaccessible $\Im_t^{ {\bf T} }$ -stopping times.

Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ -valued $\Im_t$ -Markov chain with left-hand limits which is stable and conservative, with associated sequence of occurrence times ${\bf T} = (T_n)_{n \geq 1}$ and matrix of infinitesimal characteristics $ Q = [q_{i,j}], i, j \in {\bf N^+} $ . Then a martingale property of this process is given in the following theorem.

Theorem 4. Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ -valued $\Im_t$ -Markov chain with left-hand limits, which is stable and conservative with associated matrix $ Q = [q_{i,j}], i, j \in {\bf N^+}$ . Let f be a non-negative function from ${\bf N^+} \times {\bf N^+}$ to $\Re_+$ , with

\begin{equation*}\mathbb{E}\left[\int_0^t \sum_{j \neq X_u}q_{X_u,j}|f(X_u,j)| du \right] < \infty.\end{equation*}

Then

\begin{equation*} \sum_{0 < u \leq t} f(X_{u^-},X_u)- \int_0^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du \end{equation*}

is an $\Im_t$ -martingale.

Proof. The argument to be used is the Lévy formula, which states that if f is a non-negative function from $ {\bf N^+} \times {\bf N^+}$ to $\mathbb{R}^+$ , then for any $0 \leq s \leq t$ we have

\begin{equation*} \mathbb{E}\left[ \sum_{s< u \leq t} f(X_{u^-},X_u) | \sigma(X_s)\right] = \mathbb{E}\left[\int_s^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \sigma(X_s)\right].\end{equation*}

By the $\Im_t$ -Markov property of $X_t$ , i.e., the conditional independence of $\Im_s$ and $(X_u, u > s)$ given $X_s$ , conditioning on $\sigma(X_s)$ in the above equation becomes equivalent to conditioning with respect to $\Im_s$ , and so

\begin{equation*} \mathbb{E}\left[ \sum_{s< u \leq t} f(X_{u^-},X_u) | \Im_s\right] = \mathbb{E}\left[\int_s^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \Im_s \right].\end{equation*}

Now, if $f\,:\, {\bf N^+} \times {\bf N^+} \rightarrow \mathbb{R}^+$ is such that

\begin{equation*}\mathbb{E}\left[\int_0^t \sum_{j \neq X_u}q_{X_u,j}|f(X_u,j)| du\right] < \infty,\end{equation*}

we have

\begin{equation*} \mathbb{E}\left[ \sum_{0 < u \leq t} f(X_{u^-},X_u) | \Im_s\right] - \mathbb{E}\left[ \sum_{0 < u \leq s} f(X_{u^-},X_u) | \Im_s \right] \end{equation*}
\begin{equation*} = \mathbb{E}\left[ \sum_{s< u \leq t} f(X_{u^-},X_u) | \Im_s\right] = \mathbb{E}\left[\int_s^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \Im_s \right] \end{equation*}
\begin{equation*} = \mathbb{E}\left[\int_0^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \Im_s\right] - \mathbb{E}\left[\int_0^s \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \Im_s\right].\end{equation*}

As $ \int_0^s \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du $ is $\Im_s$ -measurable, we can conclude that

\begin{equation*} \mathbb{E}\left[ \sum_{0 < u \leq t} f(X_{u^-},X_u)- \int_0^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du | \Im_s\right] \end{equation*}
\begin{equation*} = \sum_{0 < u \leq s} f(X_{u^-},X_u)- \int_0^s \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du;\end{equation*}

that is,

\begin{equation*} \sum_{0 < u \leq t} f(X_{u^-},X_u)- \int_0^t \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du \end{equation*}

is an $\Im_t$ -martingale.
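The Lévy formula at the heart of this proof can be illustrated numerically. For a two-state chain and $f \equiv 1$, it says that the expected number of jumps in $(0,t]$ equals $\mathbb{E}[\int_0^t q_{X_u}\, du]$. The sketch below (rates and sample size are illustrative) estimates both sides on the same simulated paths:

```python
import random

def levy_check(q01, q10, t, n_paths, seed):
    """Monte Carlo comparison of the two sides of the Levy formula with
    f = 1 for a two-state chain: E[# jumps in (0,t]] versus
    E[int_0^t q_{X_u} du], estimated on the same simulated paths."""
    rng = random.Random(seed)
    rates = (q01, q10)                 # rate out of state 0, rate out of state 1
    jumps_sum = integral_sum = 0.0
    for _ in range(n_paths):
        state, clock = 0, 0.0
        while True:
            hold = rng.expovariate(rates[state])
            if clock + hold > t:
                integral_sum += (t - clock) * rates[state]
                break
            clock += hold
            integral_sum += hold * rates[state]
            jumps_sum += 1
            state = 1 - state
    return jumps_sum / n_paths, integral_sum / n_paths

lhs, rhs = levy_check(1.0, 3.0, 2.0, 50000, seed=3)
```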

Example 3. Let $(X_t)_{t \geq 0}$ be a birth-and-death process with parameters $\lambda_n$ and $\mu_n$ , such that $\mathbb{E}[X_0] < \infty$ . The number of upward jumps in (0, t] is $N_t^+ = \sum_{n=0}^{\infty} N_t(n,n+1)$ , and we assume that $\sum_{n=0}^{\infty} \int_0^t \lambda_n \mathbb{P}(X_s = n) ds < \infty $ .

From Theorem 4, we have that

\begin{equation*}N_t(n,n+1)-\lambda_n \int_0^t 1_{\{ X_s = n\}}ds \end{equation*}

is an $\Im_t$ -martingale.

As $\sum_{n=0}^{\infty} \int_0^t \lambda_n \mathbb{P}(X_s = n) ds < \infty $ , we have that

\begin{equation*} \sum_{n=0}^{\infty} N_t(n,n+1) - \sum_{n=0}^{\infty} \lambda_n \int_0^t 1_{\{ X_s = n\}}ds\end{equation*}

is an $\Im_t$ -martingale.

It then follows from Definition 4 that the cumulative inaccuracy measure at time t between $ N_t^+$ and $N_t^{+*} $ is

\begin{equation*} CRI_t( N_t^+, N_t^{+*}) = \mathbb{E}\left[ \sum_{n=0}^{\infty} \int_0^t \left( \lambda_n \int_0^s 1_{\{U_{X_u} = 0 \}} 1_{\{ X_u = n\}}du \right) ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[\sum_{n=0}^{\infty} \int_0^t \lambda_n^* \left( \int_0^s 1_{\{U_{X_u} = 1 \}} 1_{\{ X_u = n\}}du \right) ds \right].\end{equation*}

Furthermore, the number of downward jumps in (0, t] is $N_t^- = \sum_{n=0}^{\infty} N_t(n,n-1)$ , associated with the $\Im_t$ -martingale

\begin{equation*} N_t(n,n-1) - \mu_n \int_0^t 1_{\{ X_s = n \}} ds.\end{equation*}

As $\mathbb{E}[X_0] < \infty$ and $\mathbb{E}[ \sum_{n=0}^{\infty} N_t(n,n+1)] < \infty,$ we have

\begin{equation*}\mathbb{E}\left[ \sum_{n=1}^{\infty} N_t(n, n-1)\right] \leq \mathbb{E}\left[ \sum_{n=0}^{\infty} N_t(n,n+1)\right] + \mathbb{E}[X_0] < \infty. \end{equation*}

So we have

\begin{equation*} \sum_{n=1}^{\infty} N_t(n, n-1) - \sum_{n=1}^{\infty} \mu_n \int_0^t 1_{\{ X_s = n \}} ds \end{equation*}

as an $\Im_t$ -martingale.

It then follows from the definition that the cumulative inaccuracy measure at time t between $ N_t^-$ and $N_t^{-*}$ is

\begin{equation*} CRI_t( N_t^-, N_t^{-*}) = \mathbb{E}\left[ \sum_{n=1}^{\infty} \int_0^t \mu_n \left( \int_0^s 1_{\{U_{X_u} = 0 \}} 1_{\{ X_u = n \}} du \right) ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[\sum_{n=1}^{\infty} \int_0^t \mu_n^* \left( \int_0^s 1_{\{U_{X_u} = 1 \}} 1_{\{ X_u = n \}} du \right) ds \right].\end{equation*}

The cumulative inaccuracy measure at time t of the birth-and-death process is

\begin{equation*} CRI_t( N_t, N_t^*) = CRI_t( N_t^+, N_t^{+*}) + CRI_t( N_t^-, N_t^{-*}).\end{equation*}

A birth-and-death process is a pure birth process if $\mu_n = 0$ for all n. If the birth rate $\lambda_n = \lambda$ , for all n, the pure birth process is simply a Poisson process.

As above, we have that

\begin{equation*}N_t - \lambda t = \sum_{n=0}^{\infty} N_t(n,n+1) - \sum_{n=0}^{\infty} \lambda\int_0^t 1_{\{ X_s = n\}}ds\end{equation*}

is an $\Im_t$ -martingale.

To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with parameter $\lambda^*$ —which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ having parameter $ \lambda$ —and the counting process $ N_t$ which corresponds to the latter, using Definition 4 we obtain

\begin{align*}& CRI_t( N_t, N_t^*) \\& \qquad = \mathbb{E}\left[ \sum_{n=0}^{\infty} \int_0^t \left( \int_0^s 1_{ \{U_n = 0 \}} \lambda 1_{\{ X_u = n \}} du \right) ds\right] +\mathbb{E}\left[ \sum_{n=0}^{\infty} \int_0^t \left( \int_0^s 1_{ \{U_n = 1 \}} \lambda^* 1_{\{ X_u = n \}} du \right) ds\right] \\ & \qquad = \sum_{n=0}^{\infty} \int_0^t\left( \int_0^s \frac{\lambda}{\lambda + \lambda^* } \lambda \mathbb{P}(X_u = n) du \right) ds +\sum_{n=0}^{\infty} \int_0^t\left( \int_0^s \frac{\lambda^*}{\lambda + \lambda^* } \lambda^* \mathbb{P}(X_u^* = n) du \right) ds \\& \qquad = \sum_{n=0}^{\infty} \int_0^t \left( \int_0^s \frac{\lambda}{\lambda + \lambda^*} \lambda e^{-\lambda u} \frac{(\lambda u)^n}{n!} du \right) ds + \sum_{n=0}^{\infty} \int_0^t \left( \int_0^s \frac{\lambda^*}{\lambda + \lambda^*} \lambda^* e^{-\lambda^* u} \frac{(\lambda^* u)^n}{n!} du\right) ds. \end{align*}
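Since $\sum_{n=0}^{\infty} e^{-\lambda u} (\lambda u)^n / n! = 1$ (and similarly for $\lambda^*$), the last expression appears to collapse to $\frac{\lambda^2 + \lambda^{*2}}{\lambda + \lambda^*} \frac{t^2}{2}$. The Python sketch below (truncation levels and parameter values are illustrative) checks this numerically, using $\int_0^t \int_0^s g(u)\, du\, ds = \int_0^t (t-u) g(u)\, du$:

```python
import math

def cri_poisson(lam, lam_star, t, n_terms=60, n_grid=4000):
    """Evaluate the double-integral series for CRI_t(N, N^*) in the Poisson
    case, truncating the sum over n and using a midpoint rule in u; the
    Poisson pmf sums over n to 1, so the result should collapse to
    (lam**2 + lam_star**2) / (lam + lam_star) * t**2 / 2."""
    h = t / n_grid
    total = 0.0
    for rate in (lam, lam_star):
        weight = rate * rate / (lam + lam_star)
        for k in range(n_grid):
            u = h * (k + 0.5)
            pmf_sum = sum(math.exp(-rate * u) * (rate * u) ** n / math.factorial(n)
                          for n in range(n_terms))
            total += weight * (t - u) * pmf_sum * h
    return total

lam, lam_star, t = 1.0, 2.5, 2.0
collapsed = (lam ** 2 + lam_star ** 2) / (lam + lam_star) * t ** 2 / 2
numeric = cri_poisson(lam, lam_star, t)
```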

Remark 4. From the above discussion on the Poisson process example, we can observe the importance and essence of the indicator process in Definition 4.

Example 4. If, in particular, in Theorem 4, we choose f to be $f(i,j) = 1_{\{ i=k \}} 1_{\{ j=l \}},$ then

\begin{equation*} \sum_{0 < u \leq t}f(X_{u^-},X_u) = N_t(k,l)\end{equation*}

is the number of transitions from state k to state l in the interval $(0, t].$ The $\Im_t$ -compensator of $N_t(k,l)$ is $ q_{k,l} \int_0^t 1_{\{ X_u =k\}}du$ , and

\begin{equation*}\mathbb{E}\left[q_{k,l}\int_0^t 1_{\{ X_u =k\}}du\right] = q_{k, l} \int_0^t \mathbb{P}(X_u = k) du \leq q_{k,l} t < \infty,\end{equation*}

by hypothesis.

To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*(k,l)$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with infinitesimal matrix $ Q^* = [q_{i,j}^*]$ , $i, j \in {\bf N^+} $ —which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ having infinitesimal matrix $ Q = [q_{i,j}]$ , $i, j \in {\bf N^+}$ —and the counting process $ N_t(k,l)$ which corresponds to the latter, using Definition 4 we obtain

\begin{equation*} CRI_t(N(k,l), N(k,l)^*) = \mathbb{E}\left[ \int_0^t q_{k,l} \left( \int_0^s 1_{\{U_k = 0 \}} 1_{\{ X_u =k\}}du \right) ds\right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \int_0^t q_{k,l}^* \left( \int_0^s 1_{\{U_k = 1 \}} 1_{\{ X_u =k\}}du \right) ds \right].\end{equation*}

Once again, as in Remark 4, we observe the importance and essence of the indicator process in Definition 4.

Example 5. Another interesting choice of f is given by $f(i,j) = 1_{\{ j=l\}}$ , in which case

\begin{equation*} \sum_{0 < u \leq t}f(X_{u^-},X_u) = N_t(l)\end{equation*}

is the number of transitions into state l during the interval $(0, t].$ The $\Im_t$ -compensator of $N_t(l)$  is

\begin{equation*}\int_0^t \sum_{i\neq l} q_{i,l} 1_{\{ X_u = i \}} du,\end{equation*}

provided

\begin{equation*}\mathbb{E}\left[\sum_{i\neq l}\int_0^t q_{i,l} 1_{\{ X_u = i\}} du\right] < \infty. \end{equation*}

To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*(l)$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with infinitesimal matrix $ Q^* = [q_{i,j}^*], i, j \in {\bf N^+}$ , which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ that has infinitesimal matrix $ Q = [q_{i,j}]$ , $i, j \in {\bf N^+}$ , and the counting process $ N_t(l)$ which corresponds to the latter, using Definition 4 we have

\begin{equation*} CRI_t(N(l), N^*(l)) = \mathbb{E}\left[ \int_0^t \left( \int_0^s \sum_{i\neq l} 1_{\{U_i = 0 \}} q_{i,l} 1_{\{ X_u = i \}} du \right) ds\right] \end{equation*}
\begin{equation*}+ \mathbb{E}\left[\int_0^t \left( \int_0^s \sum_{i\neq l} 1_{\{U_i = 1 \}} q_{i,l}^* 1_{\{ X_u = i \}} du \right) ds \right].\end{equation*}

Here again, as in Remark 4, we observe the importance and essence of the indicator process in Definition 4.

3.2. Markov chain characterization through cumulative inaccuracy measure for a proportional risk process

A Markov chain characterization through the cumulative inaccuracy measure for a proportional risk process can be obtained as follows.

Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ -valued $\Im_t$ -Markov chain that is stable and conservative, with left-hand limits, associated occurrence times ${ \bf T} = (T_n)_{n \geq 0} $ with $T_0 = 0$ , and matrix of infinitesimal characteristics $Q = [q_{i,j}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ . Let the sequence of occurrence times ${ \bf S} = (S_n)_{n \geq 0}$ , with $S_0 = 0$ , and the matrix of infinitesimal characteristics $Q^* = [q_{i,j}^*]$ , $(i,j) \in {\bf N^+} \times {\bf N^+},$ be asserted as information about the true Markov chain. We say that ${\bf T}$ and ${\bf S}$ satisfy the proportional risk hazard process property if $ q_{i,j}^* = \alpha q_{i,j}$ , $0 < \alpha \leq 1,$ for all $ (i,j) \in {\bf N^+} \times {\bf N^+}$ .

Theorem 5. If ${\bf T}$ and ${\bf S}$ satisfy the proportional risk hazard process property, then the cumulative residual inaccuracy measure at time t, $ CRI_t(N^{{\bf T}}, N^{{\bf S}}), $ uniquely determines the process $(X_t)_{t \geq 0}$ .

Proof. Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N }_+ $ -valued $\Im_t$ -Markov chain with associated occurrence times ${ \bf T}^k$ and infinitesimal characteristics $Q^k = [q_{i,j}^k]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ , and let ${ \bf S}^k$ be the occurrence times with infinitesimal characteristics $Q^{k*} = [q_{i,j}^{k*}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ , for $k=1,2.$ We then have

\begin{equation*} CRI_t(N^1, N^{1*}) = CRI_t(N^2, N^{2*}) \end{equation*}
\begin{equation*} \leftrightarrow\mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} q_{X_u,j}^1 f(X_u,j) du ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 1 \}} \alpha^1 q_{X_u,j}^{1} f(X_u,j) du ds \right] \end{equation*}
\begin{equation*} = \mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} q_{X_u,j}^2 f(X_u,j) du ds\right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 1 \}} \alpha^2 q_{X_u,j}^{2} f(X_u,j) du ds\right].\end{equation*}

However, as the infinitesimal characteristics $q_{X_{T_n^k},X_{T_{n+1}^k}}^k$ are associated with occurrence times ${\bf T}^k$ , we have

\begin{equation*}\sum_{n}q_{X_{T_n^k},X_{T_{n+1}^k}}^k = 1,\end{equation*}

implying that

\begin{equation*} \sum_{n}q_{X_{S_n^k},X_{S_{n+1}^k}}^k =0 ,\end{equation*}

and

\begin{equation*}q_{X_{S_n^k},X_{S_{n+1}^k}}^k =0 \end{equation*}

for all n and $k=1,2$ ; that is, the process does not jump at time $S_n$ :

\begin{equation*} \int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 1 \}} \alpha^k q_{X_u,j}^k f(X_u,j) du ds = 0, \ \ k=1,2. \end{equation*}

Hence, the result follows from the equivalence equations given by

\begin{equation*} CRI_t(N^1, N^{1*}) = CRI_t(N^2, N^{2*}) \end{equation*}
\begin{equation*} \leftrightarrow\mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} q_{X_u,j}^1 f(X_u,j) du ds \right] \end{equation*}
\begin{equation*} = \mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} q_{X_u,j}^2 f(X_u,j) du ds\right] \end{equation*}
\begin{equation*} \leftrightarrow \mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} [q_{X_u,j}^1 - q_{X_u,j}^2] f(X_u,j) du ds \right] = 0 \end{equation*}

\begin{equation*} \leftrightarrow \mathbb{E}\left[ \int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} [q_{X_u,j}^1 - q_{X_u,j}^2] \big( 1_{\{q_{X_u,j}^1 > q_{X_u,j}^2 \}} + 1_{\{q_{X_u,j}^1 \leq q_{X_u,j}^2 \}}\big) f(X_u,j) du ds \right] = 0 \leftrightarrow \end{equation*}
\begin{equation*} \mathbb{E}\left[ \int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} [q_{X_u,j}^1 - q_{X_u,j}^2] 1_{\{q_{X_u,j}^1 > q_{X_u,j}^2 \}} f(X_u,j) du ds \right] = \end{equation*}
\begin{equation*}\mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} [q_{X_u,j}^2 - q_{X_u,j}^1] 1_{\{q_{X_u,j}^1 \leq q_{X_u,j}^2 \}} f(X_u,j) du ds\right] \end{equation*}
\begin{equation*} \leftrightarrow\mathbb{E}\left[\int_0^t \int_0^s \sum_{j \neq X_u} 1_{\{U_{X_u} = 0 \}} |q_{X_u,j}^1 - q_{X_u,j}^2| ( 1_{\{q_{X_u,j}^1 > q_{X_u,j}^2 \}} - 1_{\{q_{X_u,j}^1 \leq q_{X_u,j}^2 \}}) f(X_u,j) du ds \right]= 0. \end{equation*}

As $ \{q_{X_u,j}^1 > q_{X_u,j}^2 \} \cap \{q_{X_u,j}^1 \leq q_{X_u,j}^2 \} = \emptyset$ and the integrand is non-negative in the above equation, we have $ |q_{X_u,j}^1 - q_{X_u,j}^2| = 0 $ for all $(i,j) \in {\bf N^+} \times {\bf N^+},$ and thus $ Q^1 = Q^2$ ; that is, the two matrices characterize the same process $(X_t)_{t \geq 0}$ .

3.3. Cumulative inaccuracy measure between coherent systems

We now consider a coherent system with n independent components, as in Barlow and Proschan [Reference Barlow and Proschan4], subject to failures and repairs according to a birth-and-death process. The state of component i, for $1 \leq i \leq n$ , denoted by $X_t(i)$ , assumes values in the space $\{0, 1\}$ , where 1 represents an operating state and 0 a repair state. Starting from an operating state, component i continues operating for a length of time that is exponentially distributed with parameter $\lambda(i)$ , and starting from a repair state, component i continues in repair for a length of time that is exponentially distributed with parameter $\mu(i)$ . We observe

\begin{equation*} \sigma \{ X_s(i), 1 \leq i \leq n, 0 \leq s < t\}.\end{equation*}

The coherent system is known to depend on its component vector $ {\bf X}_t = (X_t(1), X_t(2), \ldots,X_t(n))$ through its structure function

\begin{equation*} \phi_t = \phi({\bf X}_t) = \phi (X_t(1), X_t(2), \ldots,X_t(n)),\end{equation*}

which is a monotone increasing function, and each component in the system is relevant; that is, there is a time t and a configuration of component states $ {\bf X}_t $ such that

\begin{equation*} \phi(1_i, {\bf X}_t) - \phi(0_i, {\bf X}_t)= 1,\end{equation*}

where a 1 (resp. 0) in position i denotes an operating (resp. repair) state.

If we set $N_t(i)$ to be the number of failures of component i up to time t, from the Doob–Meyer decomposition we know that

\begin{equation*} N_t(i) - \int_0^t \lambda(i) X_s(i) ds \end{equation*}

is an $\Im_t$ -martingale. Furthermore, the number of system failures up to t is given by

\begin{equation*}N_t(\phi) = \sum_{i=1}^n \int_0^t[ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] dN_s(i).\end{equation*}

Let us now consider the process $ \phi(1_i, {\bf X}_{s_-}) - \phi(0_i, {\bf X}_{s_-})$ to be a predictable version of $\phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)$ . Because at a jump of $N_t(i)$ no other components change their status, we have that

\begin{equation*} N_t(\phi) - \sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] \lambda(i)X_s(i) ds \end{equation*}
\begin{equation*} = N_t(\phi) - \sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_{s_-}) - \phi(0_i, {\bf X}_{s_-})]\lambda(i)X_s(i) ds \end{equation*}
\begin{equation*} = \sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)][ dN_s(i) - \lambda(i)X_s(i) ds ]\end{equation*}

is an $\Im_t$ -martingale; that is, the $\Im_t$ -compensator of $N_t(\phi)$ is

\begin{equation*}\sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] \lambda(i) X_s(i) ds.\end{equation*}

Also, using the same arguments, we can prove that the number of system repairs up to t, given by $M_t(\phi)$ , has $\Im_t$ -compensator

\begin{equation*} \int_0^t \sum_{i=1}^n [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] \mu(i) ( 1 - X_s(i)) ds.\end{equation*}

The total operating and repair process, $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ , has compensator

\begin{equation*}\sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] \lambda(i) X_s(i) ds + \sum_{i=1}^n \int_0^t [ \phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)] \mu(i) ( 1 - X_s(i)) ds,\end{equation*}

and the cumulative inaccuracy measure at time t between $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ and $ (N_t^*(\phi)+ M_t^*(\phi))_{t \geq 0}$ , with parameters $\lambda^*$ and $\mu^*$ proposed by the experimenter, is then given by

\begin{equation*} CRI_t( N_t(\phi)+ M_t(\phi), N_t^*(\phi)+ M_t^*(\phi)) \end{equation*}
\begin{equation*} = \mathbb{E}\left[\sum_{i=1}^n \int_0^t \left(\int_0^s [ \phi(1_i, {\bf X}_u) - \phi(0_i, {\bf X}_u)] 1_{\{U_{X_u} = 0 \}} \lambda(i) X_u(i) du \right)ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[\sum_{i=1}^n \int_0^t \left( \int_0^s [ \phi(1_i, {\bf X}_u) - \phi(0_i, {\bf X}_u)] 1_{\{U_{X_u} = 1 \}} \mu(i) ( 1 - X_u(i))du \right)ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \sum_{i=1}^n \int_0^t \left(\int_0^s [\phi(1_i, {\bf X}_u) - \phi(0_i, {\bf X}_u)] 1_{\{U_{X_u} = 0 \}} \lambda^*(i) X_u(i) du\right)ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \sum_{i=1}^n \int_0^t \left( \int_0^s [ \phi(1_i, {\bf X}_u) - \phi(0_i, {\bf X}_u)] 1_{\{U_{X_u} = 1 \}} \mu^*(i) ( 1 - X_u (i)) du \right)ds\right]. \end{equation*}
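The martingale structure behind these compensators can be checked by simulation. For a series system (as in Example 6 below), $[\phi(1_i,{\bf X}_s) - \phi(0_i,{\bf X}_s)] X_s(i) = \prod_j X_s(j)$, so the compensator of $N_t(\phi)$ with identical rates reduces to $n\lambda \int_0^t \prod_j X_s(j)\, ds$, whose expectation should match $\mathbb{E}[N_t(\phi)]$. A Monte Carlo sketch in Python (the system size, rates, and horizon are illustrative):

```python
import random

def count_and_compensator(n_comp, lam, mu, t, rng):
    """One path of a series system of n_comp on/off Markov components
    (failure rate lam, repair rate mu), all up at time 0: returns the number
    of system failures in (0, t] and the compensator integral, which for a
    series system is n_comp * lam * int_0^t prod_j X_s(j) ds."""
    up = [1] * n_comp
    clock, failures, integral = 0.0, 0, 0.0
    while True:
        total_rate = sum(lam if u else mu for u in up)
        hold = rng.expovariate(total_rate)
        step = min(hold, t - clock)
        if all(up):
            integral += n_comp * lam * step    # compensator grows only while all are up
        clock += step
        if clock >= t:
            return failures, integral
        r, acc = rng.random() * total_rate, 0.0
        for i, u in enumerate(up):
            acc += lam if u else mu
            if r <= acc:
                if u and all(up):              # an up component fails while the system is up
                    failures += 1
                up[i] = 1 - up[i]
                break

rng = random.Random(11)
n_paths = 20000
tot_n = tot_a = 0.0
for _ in range(n_paths):
    nf, a = count_and_compensator(2, 1.0, 4.0, 3.0, rng)
    tot_n += nf
    tot_a += a
mean_n, mean_a = tot_n / n_paths, tot_a / n_paths
```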

Example 6. Let $ \phi_t = \min \{ X_t(1), \ldots , X_t(n)\}$ be an n-component series system, subject to failures and repairs according to a birth-and-death process. As before, $X_t(i)$ assumes values in the space $\{0, 1\}$ , where 1 denotes an operating state and 0 a repair state. Starting from an operating state, component i continues operating for a length of time that is exponentially distributed with parameter $\lambda$ , and starting from a repair state, component i continues in repair for a length of time that is exponentially distributed with parameter $\mu$ .

The number of system failures up to t is given by

\begin{equation*}N_t(\phi) = \sum_{i=1}^n \int_0^t \phi(1_i, {\bf X}_s) dN_s(i),\end{equation*}

and the $\Im_t$ -compensator of $N_t(\phi)$ is

\begin{equation*}\sum_{i=1}^n \int_0^t \phi(1_i, {\bf X}_s) \lambda X_s(i) ds.\end{equation*}

Also, the number of system repairs up to t, given by $M_t(\phi)$ , has $\Im_t$ -compensator

\begin{equation*}\sum_{i=1}^n \int_0^t \phi(1_i, {\bf X}_s) \mu ( 1 - X_s(i)) ds.\end{equation*}

The total operating and repair process, $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ , has compensator

\begin{equation*}\sum_{i=1}^n \left[ \int_0^t \phi(1_i, {\bf X}_s) \lambda X_s(i) ds + \int_0^t \phi(1_i, {\bf X}_s) \mu ( 1 - X_s(i)) ds \right],\end{equation*}

and the cumulative inaccuracy measure at time t between $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ and $(N_t^*(\phi)+ M_t^*(\phi))_{t \geq 0}$ , with parameters $\lambda^*$ and $\mu^*$ proposed by the experimenter, is then given by

\begin{equation*} CRI_t( N_t(\phi)+ M_t(\phi), N_t^*(\phi)+ M_t^*(\phi)) \end{equation*}
\begin{equation*} = \mathbb{E}\left[ \sum_{i=1}^n \int_0^t \int_0^s \phi(1_i, {\bf X}_u) 1_{\{U_{X_u} = 0 \}} \lambda X_u(i) du ds \right]\end{equation*}
\begin{equation*} + \mathbb{E}\left[\sum_{i=1}^n \int_0^t \int_0^s \phi(1_i, {\bf X}_u) 1_{\{U_{X_u} = 1\}} \mu ( 1 - X_u(i))du ds \right] \end{equation*}
\begin{equation*} + \mathbb{E}\left[ \sum_{i=1}^n \int_0^t \int_0^s \phi(1_i, {\bf X}_u) 1_{\{U_{X_u} = 0 \}} \lambda^* X_u(i) du ds \right]\end{equation*}
\begin{equation*}+ \mathbb{E}\left[ \sum_{i=1}^n \int_0^t \int_0^s \phi(1_i, {\bf X}_u) 1_{\{U_{X_u} = 1 \}} \mu^* ( 1 - X_u (i)) du ds \right] \end{equation*}
\begin{equation*} = \sum_{i=1}^n \int_0^t \int_0^s \left[ \frac{\lambda}{\lambda + \mu } \lambda \mathbb{P}(X_u(i)=1) + \frac{\mu}{\lambda + \mu } \mu \mathbb{P}(X_u(i) = 0) \right]du ds \end{equation*}
\begin{equation*} + \sum_{i=1}^n \int_0^t \int_0^s \left[ \frac{\lambda^*}{\lambda^* + \mu^* } \lambda^* \mathbb{P}(X_u(i)=1) + \frac{\mu^*}{\lambda^* + \mu^* } \mu^* \mathbb{P}(X_u(i) = 0)\right]du ds.\end{equation*}

If the components are in a stationary state, with $ \mathbb{P}(X_0(i) = 1) = \frac{\mu}{\lambda + \mu }$ and $ \mathbb{P}(X_0^*(i) = 1) = \frac{\mu^*}{\lambda^* + \mu^* }$ (the stationary availability of a component with failure rate $\lambda$ and repair rate $\mu$ , and analogously for the starred system), we have

\begin{equation*} CRI_t( N_t(\phi)+ M_t(\phi), N_t^*(\phi)+ M_t^*(\phi)) \end{equation*}
\begin{equation*} = \sum_{i=1}^n \int_0^t \int_0^s\left[ \frac{\lambda}{\lambda + \mu } \lambda \frac{\mu}{\lambda + \mu } + \frac{\mu}{\lambda + \mu } \mu \frac{\lambda}{\lambda + \mu } \right]du ds + \sum_{i=1}^n \int_0^t \int_0^s \left[ \frac{\lambda^*}{\lambda^* + \mu^* } \lambda^* \frac{\mu^*}{\lambda^* + \mu^* } + \frac{\mu^*}{\lambda^* + \mu^* } \mu^* \frac{\lambda^*}{\lambda^* + \mu^* }\right] du ds \end{equation*}
\begin{equation*} = n \left[\frac{\lambda \mu}{\lambda + \mu } + \frac{\lambda^* \mu^*}{\lambda^* + \mu^* } \right] \frac{t^2}{2}.\end{equation*}

4. Concluding remarks

In the framework of univariate point processes and martingale theory, a convenient way of working on dependence problems, we analyze here an inaccuracy measure between two univariate non-explosive point processes. In the first part, we deal with the concept of the cumulative inaccuracy measure for non-explosive point processes, and we consider its applications to a minimal repair point process and to a minimally repaired coherent system. Next, we specialize the cumulative inaccuracy measure to point process occurrence times relative to Markov chains. Special attention is paid to the case when the processes satisfy the proportional risk process properties, in which case we can characterize the Markov chain through the cumulative inaccuracy measure. We demonstrate the theoretical results using several examples of applications to birth-and-death processes and pure birth processes. We also apply the results to a coherent system, observed physically, at the component level, subjected to failure and repair according to a Markovian property. We are currently examining the comparison of point process cumulative residual inaccuracy measures through stochastic inequalities; we aim to use conditional classes of distributions studied extensively in reliability theory. We hope to report these findings in a future paper.

Acknowledgements

The authors thank São Paulo University for supporting this research. Thanks are also due to the anonymous reviewers and the editor for their many constructive comments and suggestions on an earlier version of this manuscript, which led to this much improved version.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Arjas, E. and Yashin, A. (1988). A note on random intensities and conditional survival functions. J. Appl. Prob. 25, 630–635.
Asadi, M. and Zohrevand, Y. (2007). On the dynamic cumulative residual entropy. J. Statist. Planning Infer. 137, 1931–1941.
Aven, T. and Jensen, U. (1999). Stochastic Models in Reliability. Springer, New York.
Barlow, R. and Proschan, F. (1981). Statistical Theory of Reliability and Life Testing: Probability Models. To Begin With, Silver Spring, MD.
Bueno, V. C. and Balakrishnan, N. (2022). A cumulative residual inaccuracy measure for coherent systems at component level and under nonhomogeneous Poisson processes. Prob. Eng. Inf. Sci. 36, 294–319.
Cover, T. M. and Thomas, J. A. (2006). Elements of Information Theory. John Wiley, Hoboken, NJ.
Dellacherie, C. (1972). Capacités et processus stochastiques. Springer, Berlin.
Di Crescenzo, A. and Longobardi, M. (2009). On cumulative entropies. J. Statist. Planning Infer. 139, 4072–4087.
Ebrahimi, N. (1996). How to measure uncertainty about residual lifetime. Sankhya A 58, 48–57.
Ghosh, A. and Kundu, C. (2017). Bivariate extension of cumulative residual and past inaccuracy measures. Statist. Papers 60, 2225–2252.
Gini, C. (1912). Variabilità e mutabilità: contributo allo studio delle distribuzioni e delle relazioni statistiche. Cuppini, Bologna.
Gini, C. (1965). On the characteristics of Italian statistics. J. R. Statist. Soc. A 128, 89–100.
Gini, C. (1966). Statistical Methods. Instituto di Statistica e Ricerca Sociale ‘Corrado Gini’, Università degli Studi di Roma, Rome.
Hart, P. E. (1975). Moment distributions in economics: an exposition. J. R. Statist. Soc. A 138, 423–434.
Kayal, S. and Sunoj, S. M. (2017). Generalized Kerridge’s inaccuracy measure for conditionally specified models. Commun. Statist. Theory Meth. 46, 8257–8268.
Kayal, S., Sunoj, S. M. and Rajesh, G. (2017). On dynamic generalized measures of inaccuracy. Statistica 77, 133–148.
Kerridge, D. F. (1961). Inaccuracy and inference. J. R. Statist. Soc. B 23, 184–194.
Kumar, V. and Taneja, H. C. (2011). A generalized entropy-based residual lifetime distributions. Biomathematics 4, 171–184.
Kumar, V. and Taneja, H. C. (2012). On dynamic cumulative residual inaccuracy measure. In Proc. World Congress on Engineering 2012, Vol. I, International Association of Engineers, Hong Kong.
Kumar, V. and Taneja, H. C. (2015). Dynamic cumulative residual and past inaccuracy measures. J. Statist. Theory Appl. 14, 399–412.
Kundu, A. and Kundu, C. (2017). Bivariate extension of (dynamic) cumulative past entropy. Commun. Statist. Theory Meth. 46, 4163–4180.
Kundu, C., Di Crescenzo, A. and Longobardi, M. (2016). On cumulative residual (past) inaccuracy for truncated random variables. Metrika 79, 335–356.
La Haye, R. and Zizler, P. (2019). The Gini mean difference and variance. Metron 77, 43–52.
Psarrakos, G. and Di Crescenzo, A. (2018). A residual inaccuracy measure based on the relevation transform. Metrika 81, 37–59.
Rao, M. (2005). More on a new concept of entropy and information. J. Theoret. Prob. 18, 967–981.
Rao, M., Chen, Y., Vemuri, B. C. and Wang, F. (2004). Cumulative residual entropy: a new measure of information. IEEE Trans. Inf. Theory 50, 1220–1228.
Rényi, A. (1961). On measures of entropy and information. In Proc. 4th Berkeley Symp. Math. Statist. Prob., Vol. I, University of California Press, Berkeley, pp. 547–561.
Shannon, C. E. (1948). A mathematical theory of communication. Bell Systems Tech. J. 27, 379–423.
Varma, R. S. (1966). Generalization of Renyi’s entropy of order $\alpha$. J. Math. Sci. 1, 34–48.