1. Introduction
An alternate measure of entropy, based on the distribution function rather than the density function of a random variable, called the cumulative residual entropy (CRE), was proposed originally by Rao et al. [Reference Rao25]. It was subsequently extended to the cumulative residual inaccuracy measure by Kumar and Taneja [Reference Kumar and Taneja19].
The main inaccuracy measure for the uncertainty of two positive and absolutely continuous random variables, S and T, defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ is that of Kerridge [Reference Kerridge17], given by
where f and g are the probability density functions of T and S, respectively.
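As a quick numerical illustration, the sketch below evaluates Kerridge's inaccuracy, assuming its standard integral form $H(T,S) = -\int_0^\infty f(t)\log g(t)\,\mathrm{d}t$, for two exponential densities; the rates and integration grid are illustrative choices, not quantities from the paper.

```python
import math

def kerridge_inaccuracy(f, g, upper=60.0, n=200_000):
    """Midpoint-rule evaluation of -∫ f(t) log g(t) dt on (0, upper)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += -f(t) * math.log(g(t)) * h
    return total

# illustrative exponential rates (not from the paper)
lf, lg = 1.0, 2.0
f = lambda t: lf * math.exp(-lf * t)
g = lambda t: lg * math.exp(-lg * t)

est = kerridge_inaccuracy(f, g)
exact = -math.log(lg) + lg / lf   # closed form for exponentials: 2 - log 2
print(est, exact)
```

When $f = g$ (so $S$ and $T$ are identically distributed), the same integral returns the Shannon entropy of $T$.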
In the case when S and T are identically distributed, the Kerridge inaccuracy measure gives the well-known Shannon entropy [Reference Shannon28], which plays an important role in many areas of science, such as probability and statistics, financial analysis, engineering and information theory; see Cover and Thomas [Reference Cover and Thomas6]. The Shannon entropy is defined as
A main drawback of the Shannon entropy is that for some probability distributions, it may be negative, and then it is no longer an uncertainty measure. This drawback was removed in the Varma entropy [Reference Varma29], which provides a generalization of order $\alpha$ and type $\beta$ of both the Shannon entropy and the Rényi entropy [Reference Rényi27]. The Varma entropy is important as a measure of complexity and uncertainty to describe many chaotic systems in physics, electronics, and engineering. The Varma entropy is defined as
It can be shown that
which is indeed the Rényi entropy. Also, for $\beta = 1$ , if $\alpha \rightarrow 1$ , the Varma entropy reduces to the Shannon entropy.
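These limiting relations can be checked numerically. The sketch below assumes the standard form of the Varma entropy, $H_{\alpha,\beta}(T) = \frac{1}{\beta-\alpha}\log\int_0^\infty f^{\alpha+\beta-1}(t)\,\mathrm{d}t$, and evaluates it for a unit-rate exponential lifetime, for which a closed form is available.

```python
import math

# Varma entropy of T ~ Exp(1), assuming the standard form
#   H_{a,b}(T) = log( ∫ f^{a+b-1}(t) dt ) / (b - a),
# which for Exp(1) reduces to -log(a + b - 1) / (b - a).
def varma_exp1(a, b):
    return -math.log(a + b - 1.0) / (b - a)

def renyi_exp1(a):
    # Renyi entropy of order a for Exp(1): -log(a) / (1 - a)
    return -math.log(a) / (1.0 - a)

print(varma_exp1(0.7, 1.0), renyi_exp1(0.7))  # b = 1 recovers the Renyi entropy
print(varma_exp1(0.999, 1.0))                 # a -> 1 approaches the Shannon entropy, 1
```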
Recently, Kumar and Taneja [Reference Kumar and Taneja18] introduced a generalized cumulative residual entropy of order $\alpha$ and type $\beta$ based on Varma entropy, and a dynamic version of it, given by
and
Several authors subsequently studied various properties of these information measures. For example, Ebrahimi [Reference Ebrahimi9] proposed a measure of uncertainty about the remaining lifetime of a system working at time t, given by $H(T_t),$ where $T_t = (T - t \mid T > t).$ Kayal and Sunoj [Reference Kayal and Sunoj15] and Kayal et al. [Reference Kayal, Sunoj and Rajesh16] presented a generalization of it and discussed its theoretical properties.
Rao et al. [Reference Rao, Chen, Vemuri and Wang26] and Rao [Reference Rao25] provided an extension of the above measure, the cumulative residual entropy for T, by using the survival function of T instead of the probability density function in the Shannon entropy. Asadi and Zohrevand [Reference Asadi and Zohrevand2] studied the corresponding dynamic measure using the conditional survival function $\mathbb{P}(T - t > x \mid T > t)$ . Di Crescenzo and Longobardi [Reference Di Crescenzo and Longobardi8] discussed an analogous measure, based on the distribution function, which is known as the cumulative past entropy of T.
Kerridge’s measure of inaccuracy has also been extended in a similar way by Kumar and Taneja [Reference Kumar and Taneja19, Reference Kumar and Taneja20]. Kundu et al. [Reference Kundu, Di Crescenzo and Longobardi22] considered the measures of Kumar and Taneja [Reference Kumar and Taneja19, Reference Kumar and Taneja20] and obtained several properties for random variables that are left-, right-, and double-truncated. Quite recently, bivariate extensions of cumulative residual (past) inaccuracy measures have been discussed by Ghosh and Kundu [Reference Ghosh and Kundu10] and Kundu and Kundu [Reference Kundu and Kundu21].
The cumulative residual inaccuracy measure of Kumar and Taneja [Reference Kumar and Taneja19] between S and T is defined as
where $\overline{F} = 1 - F$ and $\overline{G} = 1 - G$ are the reliability functions of T and S, respectively, F and G are the corresponding distribution functions, and $\Lambda_S(t) = -\log\overline{G}(t)$ is the cumulative hazard function of lifetime S. It is important to note that the expression is valid in the set $\{ t > S \wedge T \}$ , where $ S \wedge T = \min \{ S, T\}$ , and by convention, we set $ 0 \log 0 = 0$ .
Indeed, $ \varepsilon(S, T)$ represents the information content when one uses $ \overline{G}(t)$ , the survival function asserted by the experimenter, instead of the true survival function $\overline{F}(t)$ , because information is missing or incorrect. A transformation of this measure appears in the work of Psarrakos and Di Crescenzo [Reference Psarrakos and Di Crescenzo24].
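For concreteness, the sketch below evaluates $\varepsilon(S,T)$ numerically, assuming the usual integral form $\varepsilon(S,T) = \int_0^\infty \overline{F}(t)\,\Lambda_S(t)\,\mathrm{d}t$, for two exponential survival functions; the rates are illustrative.

```python
import math

def cri_exponential(l1, l2, upper=80.0, n=400_000):
    """Midpoint rule for ∫ exp(-l1 t)·(l2 t) dt, i.e. ∫ F̄(t) Λ_S(t) dt
    with T ~ Exp(l1) and S ~ Exp(l2)."""
    h = upper / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.exp(-l1 * t) * (l2 * t) * h
    return total

# illustrative rates; the closed form for exponentials is l2 / l1**2
l1, l2 = 1.0, 0.5
print(cri_exponential(l1, l2), l2 / l1**2)
```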
Our earlier paper [Reference Bueno and Balakrishnan5] extended the definition to a symmetric inaccuracy measure based on two component lifetimes T and S, which are finite positive absolutely continuous random variables defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ , with $\mathbb{P}(S \not= T )=1$ , through the family of sub-$\sigma$-algebras $(\Im_t)_{t \geq 0}$ of $\Im$ , where
satisfies Dellacherie’s conditions of right-continuity and completeness; see [Reference Dellacherie7]. Consider, through the Doob–Meyer decomposition (see Aven and Jensen [Reference Aven and Jensen3]), the unique $ \Im_t$ predictable, integrable compensator processes $(A_t)_{t \geq 0 }$ and $(B_t)_{t \geq 0 }$ such that $ 1_{\{ T \leq t \}} - A_t $ and $ 1_{\{ S \leq t \}} - B_t $ are 0-mean $\Im_t$ martingales. Then, by the well-known equivalence results between distribution functions and compensator processes (see Arjas and Yashin [Reference Arjas and Yashin1]), it follows that $ A_t = - \log \overline{F}(t \wedge T \mid \Im_t) $ and $ B_t = - \log \overline{G}(t \wedge S \mid \Im_t)$ . Identifying $\Lambda_S(t)$ and $B_t$ in the set $ \{ S > t \}$ , the paper [Reference Bueno and Balakrishnan5] then established that
Also, by using the same arguments as above, we have
In the following definition, we now present a symmetric generalization of the Kumar–Taneja inaccuracy measure.
Definition 1. Let S and T be continuous positive random variables defined in a complete probability space $(\Omega, \Im, P)$ . Then the cumulative residual inaccuracy measure is defined as
Thus, $ CRI_{T,S}$ can be seen as a dispersion measure arising when the lifetime S asserted by the experimenter’s information is used in place of the true lifetime T. Provided we identify random variables that are equal almost everywhere, $ CRI_{S,T}$ is a metric on the $L^1$ space of random variables. Hence, if $ CRI_{T,S} = 0,$ we can conclude that the survival function asserted by the experimenter is indeed the true one.
Remark 1. If, in Definition 1, T and S are independent and identically distributed, then $E[ |T - S|]$ is known as the Gini mean difference (GMD), introduced by Gini [Reference Gini11]. As a dispersion measure it can be compared with the variance of $T - S$ . This point generated a debate between Gini and the Anglo-Saxon statisticians. The most popular presentation of the variance is as a second central moment of the distribution. The most popular form of the GMD is the expected absolute difference between two random variables that are independent and identically distributed. However, as shown by Hart [Reference Hart14], and as the covariance representation makes clear, the GMD can also be defined as a central moment. Had both sides known about the alternative representations of the GMD, this debate, which was a source of conflict between the Italian school and what Gini viewed as the Western schools, could have been averted; see Gini [Reference Gini11, Reference Gini12]. The interested reader can find many results concerning the GMD in La Haye and Zizler [Reference La Haye and Zizler23].
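A short Monte Carlo sketch of Remark 1: for independent unit-rate exponential lifetimes, $|T-S|$ is again unit-rate exponential, so the GMD equals 1. The sample size and seed are illustrative choices.

```python
import random

random.seed(12345)
n = 200_000
# E|T - S| for i.i.d. Exp(1) lifetimes: |T - S| ~ Exp(1), so the GMD is 1
gmd = sum(abs(random.expovariate(1.0) - random.expovariate(1.0))
          for _ in range(n)) / n
print(gmd)   # close to 1
```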
Remark 2. It is of interest to note that the same result is obtained by integrating, over the domain from 0 to the maximum $ \max \{ S, T\} = S\vee T$ , the compensator process $A_s + B_s$ of the series system lifetime $ \min \{ S, T\} = S\wedge T$ ; see Aven and Jensen [Reference Aven and Jensen3]:
In the framework of univariate point processes and martingale theory, we analyze here an inaccuracy measure between two point processes related to Markov chains. The rest of this paper consists of two sections. Section 2 deals with the cumulative inaccuracy measure concept for non-explosive point processes and its applications to a minimal repair point process and to a minimally repaired coherent system. In Section 3, the cumulative inaccuracy measure is specialized to point-process occurrence times relating to Markov chains. Special attention is paid to the case when the processes satisfy proportional risk process properties, in which case we characterize the Markov chain through the cumulative inaccuracy measure. We demonstrate the theoretical results with several examples of applications to birth-and-death processes and pure birth processes. We also apply the results to a coherent system, observed physically at the component level, subject to failure and repair according to a Markovian property.
2. Inaccuracy measure between point processes
2.1. Cumulative inaccuracy measure between non-explosive point processes
A univariate point process over $\mathbb{R}^+$ can be described by an increasing sequence of random variables or by means of its corresponding counting process.
Definition 2. A univariate point process is an increasing sequence $ T = (T_n)_{n \geq 0}$ , with $T_0 = 0$ , of positive extended random variables, $ 0 \leq T_1 \leq T_2 \leq \ldots$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ . The inequalities are strict unless $T_n = \infty$ . If $T_{\infty} = \lim_{n \rightarrow \infty} T_n = \infty$ , the point process is said to be non-explosive.
Another equivalent way of describing a univariate point process is through a counting process $N = (N_t)_{t \geq 0}$ with
which is, for each realization w, a right-continuous step function with $N_0(w) = 0$ . As $(N_t)_{t \geq 0}$ and $(T_n)_{n \geq 0}$ carry the same information, the associated counting process is also called a point process.
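A minimal sketch of the equivalence between the two descriptions (the event times below are illustrative):

```python
def counting_process(times, t):
    """N_t = Σ 1{T_n <= t}: number of event times not exceeding t."""
    return sum(1 for tn in times if tn <= t)

T = [0.8, 1.5, 2.3, 4.1]   # illustrative, strictly increasing occurrence times
print([counting_process(T, t) for t in (0.5, 1.0, 2.3, 5.0)])  # [0, 1, 3, 4]
```

The map $t \mapsto N_t$ is a right-continuous step function, and the times $(T_n)$ can be recovered from it as the jump points.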
The mathematical description of our observations is given by the internal family of sub-$\sigma$-algebras of $\Im$ , denoted by $(\Im_t^T)_{t \geq 0}$ , where
satisfies the Dellacherie conditions of right-continuity and completeness.
For a mathematical basis for applied stochastic processes, one may refer to Aven and Jensen [Reference Aven and Jensen3]. In particular, an extended and positive random variable $\tau$ is an $\Im_t^T$ stopping time if, and only if, $\{ \tau \leq t \} \in \Im_t^T$ , for all $t \geq 0$ ; an $\Im_t^T$ stopping time $\tau$ is said to be predictable if an increasing sequence $(\tau_n)_{n \geq 0 }$ of $\Im_t^T$ stopping times, $\tau_n < \tau $ , exists such that $\lim_{n\rightarrow \infty } \tau_n = \tau $ ; an $\Im_t^T$ stopping time $\tau$ is totally inaccessible if $\mathbb{P}(\tau = \sigma < \infty) = 0$ for every predictable $\Im_t^T$ stopping time $\sigma$ .
In what follows, we assume that relations between random variables and measurable sets always hold with probability one, which means that the term $\mathbb{P}$ almost surely can be suppressed.
The point process $(N_t^{ {\bf T} })_{t \geq 0}$ is adapted to $ (\Im_t^{ {\bf T} } )_{t \geq 0}$ and $E[N_t^{ {\bf T} } \mid \Im_s^{ {\bf T} }] \geq N_s^{ {\bf T} } $ for $s < t$ ; that is, $N_t^{ {\bf T} }$ is a uniformly integrable $\Im_t^{ {\bf T} }$ submartingale. Then, from the Doob–Meyer decomposition, there exists a unique right-continuous nondecreasing $\Im_t^{ {\bf T} }$ predictable and integrable process $(A_t^{ {\bf T} })_{t \geq 0}$ , with $A_0^{ {\bf T} }= 0$ , such that $(M_t^{ {\bf T}})_{t \geq 0}$ , with $ N_t^{ {\bf T} } = A_t^{ {\bf T} } + M_t^{ {\bf T} }$ , is a uniformly integrable $\Im_t^{ {\bf T} }$ martingale. In many cases, the $\Im_t^{ {\bf T} }$ compensator, $(A_t^{ {\bf T} })_{t \geq 0}$ , of a counting process $(N_t^{ {\bf T} })_{t \geq 0}$ can be represented in the form of an integral as
for some nonnegative ( $\Im_t^{ {\bf T} }$ progressively measurable) stochastic process $(\lambda_t^{ {\bf T} })_{t \geq 0}$ , called the $\Im_t^{ {\bf T}}$ intensity of $(N_t^{ {\bf T} })_{t \geq 0}$ .
The compensator process is expressed in terms of conditional probabilities, given the available information, and it generalizes the classical notion of hazard. Intuitively, it quantifies the propensity of a failure to occur now, on the basis of all observations available up to but not including the present time.
Following Aven and Jensen [Reference Aven and Jensen3], the compensator process is given by the following theorem.
Theorem 1. Let $(N_t^{ {\bf T} })_{t \geq 0}$ be an integrable point process and $(\Im_t^{ {\bf T} })_{t \geq 0}$ its internal history. Suppose that for each n there exists a regular conditional distribution of $T_{n+1} - T_n$ , given the past $ \Im_{T_n}^{ {\bf T} },$ of the form
where $g_n(w, s)$ is a measurable function. Then the process given by $\lambda_t^{{\bf T}} = \sum_{n=0}^{\infty} \lambda_t^n$ , where
is called the $\Im_t^{ {\bf T} } $ intensity of $N_t^{ {\bf T} }$ , and
is an $\Im_t^T$ martingale.
We note that the compensator process
is defined piecewise.
Our aim now is to define a cumulative residual inaccuracy measure between two independent non-explosive point processes, ${\bf T}$ and ${\bf S}$ , and then to proceed by using a superposition process.
Definition 3. The superposition of two univariate point processes ${\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ with compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively, is the marked point process $(V_n, U_n)_{n \geq 1}$ , where ${\bf V} = (V_n)_{n \geq 0}$ , with $V_0 = 0$ , is a univariate point process and ${\bf U} = (U_n)_{n \geq 0}$ , the indicator process, is a sequence of random variables taking values in a measurable space $(\{0,1 \} ,\sigma(\{0,1 \} ))$ , resulting from pooling together the time points of events occurring in each of the two separate point processes. Here 0 stands for an occurrence of the process ${\bf T}$ , that is, $V_n = T_k$ for some k, in which case $V_n = \max_{1 \leq j \leq n}\{ (1-U_j)\cdot V_j \}$ , and 1 stands for an occurrence of the process ${\bf S}$ , that is, $V_n = S_j$ for some j, in which case $V_n = \max_{1 \leq j \leq n}\{ U_j \cdot V_j \}$ .
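A minimal sketch of Definition 3, pooling two illustrative event-time sequences into the superposition $(V_n, U_n)$, with 0 marking an occurrence of ${\bf T}$ and 1 an occurrence of ${\bf S}$:

```python
def superpose(T, S):
    """Pool the occurrence times; U_n = 0 marks T, U_n = 1 marks S."""
    merged = sorted([(t, 0) for t in T] + [(s, 1) for s in S])
    V = [v for v, _ in merged]
    U = [u for _, u in merged]
    return V, U

T = [1.0, 3.0, 5.0]   # illustrative occurrence times of the process T
S = [2.0, 4.5]        # illustrative occurrence times of the process S
V, U = superpose(T, S)
print(V)  # [1.0, 2.0, 3.0, 4.5, 5.0]
print(U)  # [0, 1, 0, 1, 0]
```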
Now, we define
as the number of occurrences of the process T and
as the number of occurrences of the process S on the superposition process.
The observed history is thus
Theorem 2. Let $(V_n, U_n)_{n \geq 1}$ be a marked point process, the superposition of two univariate point processes ${\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ , defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ with $\Im_t^V $ compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively. Furthermore, let $N_t(0)$ and $N_t(1)$ be the $\Im_t$ submartingales defined as the number of occurrences of the processes T and S, respectively. Then $N_t(0)$ has $\Im_t$ compensator
and $N_t(1)$ has $\Im_t$ compensator
Proof. To prove this theorem, we use the known result that the integration of an $\Im_t$ predictable process with respect to an integrable $\Im_t$ martingale of bounded variation is an $\Im_t$ martingale.
Observe that the deterministic process
is left-continuous and, therefore, $\Im_t$ predictable, implying that
where $ M_t^n = 1_{ \{ T_n \leq t\}}  A_t^n$ , is an $\Im_t$ martingale.
As the sum of $\Im_t$ martingales is an $\Im_t$ martingale, we readily have that
is an $\Im_t$ martingale. As the compensator is unique, the proof is readily completed. The proof for the $N_t(1)$ process follows in an analogous manner.
Remark 3. In view of Theorem 2 and Definition 3, we observe that, without loss of generality, the compensator definitions are modified as
where $V_n^{{*}} = \max_{1 \leq j < n} \{ U_j \cdot V_j \},$ and
where $V_n^{{*}} = \max_{1 \leq j < n} \{ (1 - U_j)\cdot V_j \}.$
In view of Definition 1, in the introduction, we define a cumulative inaccuracy measure between two univariate point processes at any $\Im_t$ stopping time $\tau$ , in particular, at time t, as follows.
Definition 4. Let $ {\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ be univariate point processes with $\Im_t^V $ compensator processes $(A_t)_{t \geq 0}$ and $(B_t)_{t \geq 0}$ , respectively, defined in a complete probability space $(\Omega, \Im, \mathbb{P})$ . Let $(V_n)_{n \geq 0}$ be their superposition process. Then the cumulative residual inaccuracy measure at time t between $ {\bf T}$ and $ {\bf S}$ is given by
It is important to observe that, under Theorem 2, the indicator process is essential in Definition 4.
An interpretation of Definition 4 is given by the following theorem.
Theorem 3. Let $ {\bf T} = (T_n)_{n \geq 0}$ and ${\bf S} = (S_n)_{n \geq 0}$ be two nonexplosive univariate point processes, and let ${\bf V} = (V_n)_{n \geq 0}$ be their superposition process. Then
where
Proof. We let $(\tau_n^{ \bf T})_{n \geq 0}$ be an increasing sequence of $\Im_t$ stopping times serving as the localizing sequence of the stopped martingale $( N_{t\wedge \tau_n^{ \bf T}}^{ \bf T} - A_{t \wedge \tau_n^{ \bf T} } ) _{t \geq 0} $ , and let $(\tau_n^{ \bf S})_{n \geq 0}$ be an increasing sequence of $\Im_t$ stopping times serving as the localizing sequence of the stopped martingale $( N_{t\wedge \tau_n^{ \bf S} }^{ \bf S} - B_{t \wedge \tau_n^{ \bf S}} ) _{t \geq 0}$ . We then apply the optional sampling theorem; see Aven and Jensen [Reference Aven and Jensen3].
Note that $\tau_n = \tau_n^{ \bf T} \vee \tau_n^{ \bf S}$ is also an $\Im_t$ stopping time and that the point process $(S_k)_{k \geq 0}$ defines a partition of $ \mathbb{R}^+$ , that is, $ \mathbb{R}^+ = \bigcup_{k=1}^{\infty} (S_{k-1}, S_k]$ . Therefore, we can write
However, the compensator differential $dA_u$ is defined piecewise and can be written as
where $dA_u(j)$ is the differential compensator of $ 1_{\{ T_j \leq t\}}$ defined in $( T_{j-1}, T_j]$ , and 0 otherwise. It then follows that
Using similar arguments we can prove that
Hence we have
which is the sum of the interarrival times of the superposition process, where
As $( N_{t\wedge \tau_n}^{\bf V}) _{t \geq 0} $ is uniformly integrable, we let $\lim_{n \rightarrow \infty} \tau_n = \infty$ ; provided that we identify random variables that are equal almost everywhere, the quantity $CRI_{\infty}(N^{ {\bf T} }, N^{ {\bf S} }) = \sum_{k=1}^{\infty} (V_k - V_{k-1}) $ , as t goes to infinity, can be seen as a dispersion measure in the $L^1$ space of sequences of random variables when using the point process ${\bf S}$ , which represents the information asserted by the experimenter about the true point process ${ \bf T}$ .
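Under the literal reading of the displayed sum (with $V_0 = 0$, and event times chosen for illustration), the measure at a finite horizon can be sketched as:

```python
def cri_t(T, S, t):
    """Sum of interarrival times of the superposition process up to t,
    with V_0 = 0 (one literal reading of the displayed sum)."""
    V = [0.0] + sorted(v for v in list(T) + list(S) if v <= t)
    return sum(V[k] - V[k - 1] for k in range(1, len(V)))

print(cri_t([1.0, 3.0, 5.0], [2.0, 4.5], 4.0))  # telescopes to the last pooled time, 3.0
```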
2.2. Application to minimal repair point processes
A repair is minimal if the intensity $\lambda_t^T$ is not affected by the occurrence of failures, or, in other words, if we cannot determine the failure time points from observation of $\lambda_t^T$ . Formally, we have the following definition.
Definition 5. Let $T = (T_n)_{n \geq 0}$ be a univariate point process with an integrable point process $N^T$ and corresponding $\Im_t$ intensity $(\lambda_t^T)_{t \geq 0}.$ Let $ \Im_t^{\lambda^T} = \sigma( \lambda_s^T, 0 \leq s \leq t) $ be the filtration generated by $\lambda^T$ . Then the point process T is said to be a minimal repair process (MRP) if none of the variables $T_n$ , $n \geq 0$ , for which $P(T_n < \infty) > 0$ is an $\Im_t^{\lambda^T}$ stopping time.
If T is a nonhomogeneous Poisson process, $\lambda_t = \lambda(t)$ is a time-dependent deterministic function, and this means that the age is not changed as a result of a failure. Here, $ \Im_t^{\lambda^T} = \{ \Omega, \emptyset \}$ for all $ t \in \mathbb{R^+}$ , and the failure times $T_n$ are not $\Im_t^{\lambda^T}$ stopping times.
Example 1. Let $ (T_n)_{n \geq 0 }$ be a Weibull process with parameters $\beta$ and $\theta_1$ . Let $ (S_n)_{n \geq 0 }$ be a Weibull process with parameters $\beta$ and $\theta_2$ asserted by the experimenter.
for $0 \leq t_{i-1} < t_i$ , where the $t_i$ are the ordered observations. The $\Im^T$ compensator process is then
for $0 \leq t_{i1} < t_i$ , where the $t_i$ are the ordered observations. The $\Im^T$ compensator process is then
Furthermore, with respect to $ (S_n)_{n \geq 0 }$ , the $\Im^S$ compensator process is
Note that the compensator process in a minimal repair point process is independent of the occurrence times, in which case the indicator process does not apply. Therefore, the cumulative inaccuracy measure at time t is given by
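For the Weibull processes of Example 1, the compensators take the standard power-law form $\Lambda(t) = (t/\theta)^\beta$; the sketch below computes the gap between the true and asserted compensators at a fixed time, one plausible reading of the displayed measure (parameter values are illustrative).

```python
def weibull_compensator(t, beta, theta):
    """Standard power-law compensator Λ(t) = (t / θ)^β."""
    return (t / theta) ** beta

beta, theta1, theta2 = 2.0, 1.0, 2.0   # illustrative parameters
t = 3.0
gap = abs(weibull_compensator(t, beta, theta1)
          - weibull_compensator(t, beta, theta2))
print(gap)  # |9.0 - 2.25| = 6.75
```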
Example 2. (Application to a coherent system minimally repaired at component level.) Suppose we observe, as in Barlow and Proschan [Reference Barlow and Proschan4], the lifetimes of a system with three components, $U_1$ , $U_2$ , and $U_3$ , which are independent and identically exponentially distributed with parameter $\lambda$ , through the filtration
The system with lifetime $T_1 = U_1 \wedge ( U_2 \vee U_3 )$ has intensity $\lambda_t^{T_1} = \lambda+ \lambda 1_{\{ U_2 \wedge U_3 \leq t \}}$ , and clearly $T_1$ is not an $\Im_t^{\lambda^{T_1}}$ stopping time.
At system failure $T_1$ , the component that causes the system to fail is repaired minimally. As the component lifetimes are independent and identically distributed, the additional lifetime, given by the lifetime $U_4$ , is independent of and distributed identically to $U_1$ , $U_2$ , and $U_3$ , and the repaired system then has lifetime $T_2 = T_1 + U_4.$
We allow for repeated minimal repairs considering a sequence of random variables, $(U_n)_{n \geq 1}$ , that are independent and identically exponentially distributed with parameter $\lambda$ . Then $ T_1 = U_1 \wedge (U_2 \vee U_3) $ and $ T_{n+1} = T_n + U_{n+3},$ $ n \geq 2$ , successively, constituting a minimal repair point process with compensator
where T is the actual system lifetime.
Let us now consider another minimally repaired coherent system, $S_1$ , asserted by the experimenter, with the same structure function, but component lifetimes $V_1$ , $V_2$ , and $V_3$ , which are independent and identically exponentially distributed with parameter $\lambda^*$ and compensator process
where S is the actual system lifetime.
Then, in the set $\{ t \leq T \wedge S \}$ , the cumulative inaccuracy measure at time t is given by
Clearly, the expression for $ CRI_t(N^T, N^S)$ can be negative. However, we observe that the superposition process is always minimally repaired, with an exponential lifetime with mean $\frac{1}{\lambda + \lambda^*}$ . We then consider $t \geq \frac{1}{\lambda + \lambda^*}$ , resulting in a positive $ CRI_t(N^T, N^S)$ .
Also, in the minimally repaired coherent system, the compensator process is independent of occurrence times, in which case the indicator process does not apply.
3. Inaccuracy measure between point processes related to Markov chains
3.1. Inaccuracy measure between occurrence times in Markov chains
Let $(X_t)_{t \geq 0}$ be an E-valued process defined in a probability space $(\Omega,\Im, \mathbb{P})$ and adapted to some history $(\Im_t)_{t \geq 0}$ . The observations are through its internal history
for all $t \geq 0$ , and $ \Im_t^X \subseteq \Im_t$ for all $t \geq 0$ . Then $\Im_{\infty}^X $ records all the events linked to the process $(X_t)_{t \geq 0}$ .
The process $(X_t)_{t \geq 0}$ is said to be an $\Im_t$ Markov process if, and only if, for all $t \geq 0$ , $ \sigma(X_s, s > t)$ and $\Im_t$ are independent, given $ X_t$ . In particular, if E is the set of natural numbers ${\bf N^+}$ , we call $(X_t)_{t \geq 0}$ an $\Im_t$ Markov chain. In what follows, we consider a right-continuous Markov chain with left limits.
The $\Im_t$ Markov chain is associated with a sequence of its sojourn times $ (T_{n+1} - T_n)_{n \geq 0}$ , with $ T_0 = 0$ , and its infinitesimal characteristics $Q = [q_{i,j}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ . If, for each natural number i, we have
then the chain is said to be stable and conservative. We set $q_{i,i} = -q_i$ .
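A minimal sketch of the stable-and-conservative condition for an illustrative three-state generator matrix:

```python
# Illustrative 3-state generator: off-diagonal rates q_{i,j}, diagonal q_{i,i} = -q_i
Q = [
    [-2.0,  2.0,  0.0],
    [ 1.0, -3.0,  2.0],
    [ 0.0,  4.0, -4.0],
]

for i, row in enumerate(Q):
    q_i = sum(rate for j, rate in enumerate(row) if j != i)  # q_i = Σ_{j≠i} q_{i,j} < ∞
    assert row[i] == -q_i          # q_{i,i} = -q_i
    assert abs(sum(row)) < 1e-12   # conservative: each row sums to zero
print("Q is a stable, conservative generator")
```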
The interpretation of the terms $q_i$ and $q_{i,j}$ is as follows:
We are now interested in the cumulative inaccuracy process between point processes related to Markov chain occurrence times, with

• $N_t^X(k, l)$ denoting the number of transitions from state k to state l in the interval (0, t],

• $N_t^X(l)$ denoting the number of transitions into state l during the interval (0, t], and

• $N_t^X$ denoting the number of transitions in (0, t].
Now, the occurrence observation times are through the internal family of sub-$\sigma$-algebras of $\Im$ ,
and satisfy the Dellacherie conditions of right-continuity and completeness. This family of counts leads to the same information as the sequence of occurrence times $ {\bf T}$ and hence provides an equivalent point process description. Clearly, from the sojourn times interpretation, the $ T_n,$ $n > 0$ , are totally inaccessible $\Im_t$ stopping times. Indeed, absolutely continuous lifetimes are totally inaccessible $\Im_t^{ {\bf T} }$ stopping times.
Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ valued $\Im_t$ Markov chain with left-hand limits which is stable and conservative, with associated sequence of occurrence times ${\bf T} = (T_n)_{n \geq 1}$ and matrix of infinitesimal characteristics $ Q = [q_{i,j}], i, j \in {\bf N^+} $ . Then a martingale property of this process is given in the following theorem.
Theorem 4. Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ valued $\Im_t$ Markov chain with left-hand limits, which is stable and conservative with associated matrix $ Q = [q_{i,j}], i, j \in {\bf N^+}$ . Let f be a nonnegative function from ${\bf N^+} \times {\bf N^+}$ to $\mathbb{R}^+$ , with
Then
is an $\Im_t$ martingale.
Proof. The argument to be used is the Lévy formula, which states that if f is a nonnegative function from $ {\bf N^+} \times {\bf N^+}$ to $\mathbb{R}^+$ , then for any $0 \leq s \leq t$ we have
By the $\Im_t$ Markov property of $X_t$ , i.e., the conditional independence of $\Im_s$ and $(X_u, u > s)$ given $X_s$ , conditioning on $\sigma(X_s)$ in the above equation becomes equivalent to conditioning with respect to $\Im_s$ , and so
Now, if $f\,:\, {\bf N^+} \times {\bf N^+} \rightarrow \mathbb{R}^+$ is such that
we have
As $ \int_0^s \sum_{j \neq X_u}q_{X_u,j}f(X_u,j) du $ is $\Im_s$ measurable, we can conclude that
that is,
is an $\Im_t$ martingale.
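The compensator identity underlying Theorem 4 can be checked by simulation: for a two-state chain, $E[N_t(0,1)]$ should match $q_{0,1}\,E\big[\int_0^t 1_{\{X_u=0\}}\,\mathrm{d}u\big]$. The rates, horizon, and sample size below are illustrative choices.

```python
import random

random.seed(7)
q01, q10, horizon = 1.5, 1.0, 4.0   # illustrative rates and time horizon

def run():
    """One path on [0, horizon]: count 0->1 jumps and occupation time of state 0."""
    t, state, jumps01, occ0 = 0.0, 0, 0, 0.0
    while True:
        rate = q01 if state == 0 else q10
        hold = random.expovariate(rate)
        if t + hold >= horizon:
            if state == 0:
                occ0 += horizon - t
            return jumps01, occ0
        if state == 0:
            occ0 += hold
            jumps01 += 1
        t += hold
        state = 1 - state

n = 50_000
res = [run() for _ in range(n)]
mean_jumps = sum(r[0] for r in res) / n        # estimates E[N_t(0,1)]
mean_comp = q01 * sum(r[1] for r in res) / n   # estimates q01·E[∫ 1{X_u=0} du]
print(mean_jumps, mean_comp)   # agree up to Monte Carlo error
```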
Example 3. Let $(X_t)_{t \geq 0}$ be a birth-and-death process with parameters $\lambda_n$ and $\mu_n$ , such that $E[X_0] < \infty$ . If $\sum_{n=0}^{\infty} \int_0^t \lambda_n \mathbb{P}(X_s = n) ds < \infty $ , the number of upward jumps in (0, t] is $N_t^+ = \sum_{n=0}^{\infty} N_t(n,n+1)$ .
From Theorem 4, we have that
is an $\Im_t$ martingale.
As $\sum_{n=0}^{\infty} \int_0^t \lambda_n \mathbb{P}(X_s = n) ds < \infty $ , we have that
is an $\Im_t$ martingale.
It then follows from Definition 4 that the cumulative inaccuracy measure at time t between $ N_t^+$ and $N_t^{+*} $ is
Furthermore, the number of downward jumps in (0, t] is $N_t^- = \sum_{n=1}^{\infty} N_t(n,n-1)$ , associated with the $\Im_t$ martingale
As $\mathbb{E}[X_0] < \infty$ and $\mathbb{E}[ \sum_{n=0}^{\infty} N_t(n,n+1)] < \infty,$ we have
So we have
as an $\Im_t$ martingale.
It then follows from the definition that the cumulative inaccuracy measure at time t between $ N_t^-$ and $N_t^{-*}$ is
The cumulative inaccuracy measure at time t of the birth-and-death process is
A birth-and-death process is a pure birth process if $\mu_n = 0$ for all n. If the birth rate $\lambda_n = \lambda$ for all n, the pure birth process is simply a Poisson process.
As above, we have that
is an $\Im_t$ martingale.
To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with parameter $\lambda^*$ —which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ having parameter $ \lambda$ —and the counting process $ N_t$ which corresponds to the latter, using Definition 4 we obtain
Remark 4. From the above discussion on the Poisson process example, we can observe the importance and essence of the indicator process in Definition 4.
Example 4. If, in particular, in Theorem 4, we choose f to be $f(i,j) = 1_{\{ i=k \}} 1_{\{ j=l \}},$ then
is the number of transitions from state k to state l in the interval $(0, t].$ The $\Im_t$ compensator of $N_t(k,l)$ is $ q_{k,l} \int_0^t 1_{\{ X_u =k\}}du$ , and
by hypothesis.
To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*(k,l)$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with infinitesimal matrix $ Q^* = [q_{i,j}^*]$ , $i, j \in {\bf N^+} $ —which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ having infinitesimal matrix $ Q = [q_{i,j}]$ , $i, j \in {\bf N^+}$ —and the counting process $ N_t(k,l)$ which corresponds to the latter, using Definition 4 we obtain
Once again, as in Remark 4, we observe the importance and essence of the indicator process in Definition 4.
Example 5. Another interesting choice of f is given by $f(i,j) = 1_{\{ j=l\}}$ , in which case
is the number of transitions into state l during the interval $(0, t].$ The $\Im_t$ compensator of $N_t(l)$ is
provided
To evaluate the cumulative inaccuracy measure at time t between the counting process $ N_t^*(l)$ related to the Markov chain $(X^*_t)_{t \geq 0}$ with infinitesimal matrix $ Q^* = [q_{i,j}^*], i, j \in {\bf N^+}$ , which represents the experimenter’s information about the true Markov chain $(X_t)_{t \geq 0}$ that has infinitesimal matrix $ Q = [q_{i,j}]$ , $i, j \in {\bf N^+}$ , and the counting process $ N_t(l)$ which corresponds to the latter, using Definition 4 we have
Here again, as in Remark 4, we observe the importance and essence of the indicator process in Definition 4.
3.2. Markov chain characterization through cumulative inaccuracy measure for a proportional risk process
A Markov chain characterization through the cumulative inaccuracy measure for a proportional risk process can be obtained as follows.
Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ valued $\Im_t$ Markov chain that is stable and conservative, with left-hand limits, associated occurrence times ${ \bf T} = (T_n)_{n \geq 0} $ with $T_0 = 0$ , and matrix of infinitesimal characteristics $Q = [q_{i,j}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ . Let the sequence of occurrence times ${ \bf S} = (S_n)_{n \geq 0}$ , with $S_0 = 0$ , and the matrix of infinitesimal characteristics $Q^* = [q_{i,j}^*]$ , $(i,j) \in {\bf N^+} \times {\bf N^+},$ be asserted as information about the true Markov chain. We say that ${\bf T}$ and ${\bf S}$ satisfy the proportional risk hazard process if $ q_{i,j}^* = \alpha q_{i,j}$ , $0 < \alpha \leq 1,$ for all $ (i,j) \in {\bf N^+} \times {\bf N^+}$ .
Theorem 5. If ${\bf T}$ and ${\bf S}$ satisfy the proportional risk hazard process, then the cumulative residual inaccuracy measure at time t, $ CRI_t(N^{{\bf T}}, N^{{\bf S}}), $ uniquely determines the process $(X_t)_{t \geq 0}$ .
Proof. Let $(X_t)_{t \geq 0}$ be a right-continuous ${\bf N^+}$ valued $\Im_t$ Markov chain with associated occurrence times ${ \bf T}^k$ and infinitesimal characteristics $Q^k = [q_{i,j}^k]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ , and let ${ \bf S}^k$ be the occurrence times with infinitesimal characteristics $Q^{k*} = [q_{i,j}^{k*}]$ , $(i,j) \in {\bf N^+} \times {\bf N^+}$ , for $k=1,2.$ We then have
However, as the infinitesimal characteristics $q_{X_{T_n^k},X_{T_{n+1}^k}}^k$ are associated with occurrence times ${\bf T}^k$ , we have
implying that
and
for all n and $k=1,2$ ; that is, the process does not jump at time $S_n$ :
Hence, the result follows from the equivalence equations given by
As $ \{q_{X_u,j}^1 > q_{X_u,j}^2 \} \cap \{q_{X_u,j}^1 \leq q_{X_u,j}^2 \} = \emptyset$ and the integrand is positive in the above equation, we have $ q_{X_u,j}^1 - q_{X_u,j}^2 = 0 $ for all $(i,j) \in {\bf N^+} \times {\bf N^+},$ and thus $ Q^1 = Q^2$ , characterizing the same process $(X_t)_{t \geq 0}$ .
3.3. Cumulative inaccuracy measure between coherent systems
We now consider a coherent system with n independent components, as in Barlow and Proschan [Reference Barlow and Proschan4], subject to failures and repairs according to a birth-and-death process. The state of component i, for $1 \leq i \leq n$, denoted by $X_t(i)$, assumes values in the space $\{0, 1\}$, where 1 represents an operating state and 0 a repair state. Starting from an operating state, component i continues operating for a length of time that is exponentially distributed with parameter $\lambda(i)$, and starting from a repair state, component i continues in repair for a length of time that is exponentially distributed with parameter $\mu(i)$. We observe
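The alternating operating/repair dynamics of a single component can be checked by simulation. The sketch below is an illustration only (the parameter values and the function name are assumptions): it estimates the long-run operating fraction, which for up-times distributed as $\mathrm{Exp}(\lambda)$ and down-times as $\mathrm{Exp}(\mu)$ equals $(1/\lambda)/(1/\lambda + 1/\mu) = \mu/(\lambda + \mu)$ by the renewal-reward theorem.

```python
import random

# Simulate many operating/repair cycles of one component:
# up-times ~ Exp(lam), down-times ~ Exp(mu), all independent.
def operating_fraction(lam, mu, cycles, rng):
    up = sum(rng.expovariate(lam) for _ in range(cycles))    # total operating time
    down = sum(rng.expovariate(mu) for _ in range(cycles))   # total repair time
    return up / (up + down)

# Illustrative (assumed) rates: mean up-time 1, mean down-time 1/2,
# so the long-run operating fraction should be near mu/(lam+mu) = 2/3.
rng = random.Random(0)
frac = operating_fraction(lam=1.0, mu=2.0, cycles=200_000, rng=rng)
assert abs(frac - 2.0 / 3.0) < 0.01
```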
The coherent system is known to depend on its component vector $ {\bf X}_t = (X_t(1), X_t(2), \ldots,X_t(n))$ through its structure function
which is a monotone increasing function, and each component in the system is relevant; that is, there is a time t and a configuration of component states $ {\bf X}_t $ such that
where a 1 (resp. 0) in position i denotes an operating (resp. repair) state.
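The two coherence requirements just stated, monotonicity of $\phi$ and relevance of every component, can be verified exhaustively when n is small. The sketch below is an illustration under an assumed structure function (a 2-out-of-3 system, which is not an example from the paper):

```python
from itertools import product

# Assumed 2-out-of-3 structure function: the system operates
# iff at least two of the three components operate.
def phi(x):
    return 1 if sum(x) >= 2 else 0

n = 3
states = list(product([0, 1], repeat=n))

# Monotone increasing: x <= y componentwise implies phi(x) <= phi(y).
monotone = all(
    phi(x) <= phi(y)
    for x in states for y in states
    if all(a <= b for a, b in zip(x, y))
)

# Component i is relevant if some configuration of the other
# components satisfies phi(1_i, x) - phi(0_i, x) = 1.
def relevant(i):
    return any(
        phi(x[:i] + (1,) + x[i + 1:]) - phi(x[:i] + (0,) + x[i + 1:]) == 1
        for x in states
    )

assert monotone
assert all(relevant(i) for i in range(n))
```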
If we set $N_t(i)$ to be the number of failures of component i up to time t, then from the Doob–Meyer decomposition we know that
is an $\Im_t$ martingale. Furthermore, the number of system failures up to t is given by
Let us now consider the process $\phi(1_i, {\bf X}_{s^-}) - \phi(0_i, {\bf X}_{s^-})$, which is a predictable version of $\phi(1_i, {\bf X}_s) - \phi(0_i, {\bf X}_s)$. Because at a jump of $N_t(i)$ no other component changes its status, we have that
is an $\Im_t$ martingale; that is, the $\Im_t$ compensator of $N_t(\phi)$ is
Also, using the same arguments, we can prove that the number of system repairs up to t, given by $M_t(\phi)$ , has $\Im_t$ compensator
The total operating and repair process, $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ , has compensator
and the cumulative inaccuracy measure at time t between $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ and $ (N_t^*(\phi)+ M_t^*(\phi))_{t \geq 0}$ , with parameters $\lambda^*$ and $\mu^*$ proposed by the experimenter, is then given by
Example 6. Let $\phi_t = \min \{X_t(1), \ldots, X_t(n)\}$ be an $n$-component series system, subject to failures and repairs according to a birth-and-death process. As before, $X_t(i)$ assumes values in the space $\{0, 1\}$, where 1 denotes an operating state and 0 a repair state. Starting from an operating state, component i continues operating for a length of time that is exponentially distributed with parameter $\lambda$, and starting from a repair state, component i continues in repair for a length of time that is exponentially distributed with parameter $\mu$.
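A system failure of the series structure is a downward jump of $\min_i X_t(i)$, that is, a component failure occurring while all components are operating. The event-driven simulation below counts such jumps up to a finite horizon; the parameter values and the function name are assumptions chosen purely for illustration.

```python
import random

# Each component alternates Exp(lam) operating periods and Exp(mu)
# repair periods, independently of the others; a system failure of the
# series structure is a transition of min_i X_t(i) from 1 to 0.
def count_system_failures(n, lam, mu, horizon, rng):
    state = [1] * n                                      # all components start operating
    clocks = [rng.expovariate(lam) for _ in range(n)]    # absolute time of each component's next jump
    failures = 0
    while True:
        i = min(range(n), key=lambda j: clocks[j])       # next component to jump
        t = clocks[i]
        if t > horizon:
            return failures
        was_up = all(state)                              # system state just before the jump
        state[i] ^= 1                                    # component i fails or is repaired
        if was_up and state[i] == 0:
            failures += 1                                # downward jump of the structure function
        rate = lam if state[i] == 1 else mu              # rate of component i's next transition
        clocks[i] = t + rng.expovariate(rate)

result = count_system_failures(n=3, lam=1.0, mu=2.0, horizon=100.0,
                               rng=random.Random(42))
```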
The number of system failures up to t is given by
and the $\Im_t$ compensator of $N_t(\phi)$ is
Also, the number of system repairs up to t, given by $M_t(\phi)$ , has $\Im_t$ compensator
The total operating and repair process, $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ , has compensator
and the cumulative inaccuracy measure at time t between $(N_t(\phi)+ M_t(\phi))_{t \geq 0}$ and $(N_t^*(\phi)+ M_t^*(\phi))_{t \geq 0}$ , with parameters $\lambda^*$ and $\mu^*$ proposed by the experimenter, is then given by
If the components are in a stationary state, with $\mathbb{P}(X_0(i) = 1) = \frac{\mu}{\lambda + \mu}$ and $\mathbb{P}(X_0^*(i) = 1) = \frac{\mu^*}{\lambda^* + \mu^*}$, we have
4. Concluding remarks
In the framework of univariate point processes and martingale theory, which provides a convenient way of working on dependence problems, we analyze here an inaccuracy measure between two univariate nonexplosive point processes. In the first part, we deal with the concept of the cumulative inaccuracy measure for nonexplosive point processes, and we consider its applications to a minimal repair point process and to a minimally repaired coherent system. Next, we specialize the cumulative inaccuracy measure to point-process occurrence times relative to Markov chains. Special attention is paid to the case when the processes satisfy the proportional risk process properties, in which case we can characterize the Markov chain through the cumulative inaccuracy measure. We demonstrate the theoretical results using several examples of applications to birth-and-death processes and pure birth processes. We also apply the results to a coherent system, observed at the component level and subject to failure and repair according to a Markovian property. We are currently examining the comparison of point-process cumulative residual inaccuracy measures through stochastic inequalities; we aim to use conditional classes of distributions studied extensively in reliability theory. We hope to report these findings in a future paper.
Acknowledgements
The authors thank São Paulo University for supporting this research. Thanks are also due to the anonymous reviewers and the editor for their many constructive comments and suggestions on an earlier version of this manuscript, which led to this much improved version.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There were no competing interests to declare that arose during the preparation or publication of this article.