1 Introduction
Point processes on the line, generated by transitions of continuous-time Markov chains (CTMCs), have been studied intensely by the applied probability community over the past few decades under the umbrella of matrix analytic methods (MAM) [19]. These have been applied to teletraffic [1], business networks [17], social operations research [29] and biological systems [25]. The typical model, referred to as the Markovian arrival process (MAP), is comprised of a finite-state irreducible CTMC, which generates events at selected instances of state change or according to Poisson processes modulated by the CTMC. MAPs are dense in the class of point processes, so that they can essentially approximate any point process [5]. Yet, at the same time, they are analytically tractable and may often be incorporated effectively within more complex stochastic models [24].
In general, treating point processes as stationary often yields a useful mathematical perspective that matches scenarios when there is no known dependence on time. In describing a point process, we use $N(t)$ to denote the number of events during the interval $[0,t]$, and further use the sequence $\{T_{n}\}$ to denote the sequence of interarrival times. Two notions of stationarity are useful in this respect. A point process is time-stationary if the distribution of the number of events within a given set is invariant to time translations. That is, for a given subset of the timeline, ${\cal T} \subset {\mathbb R}$, the distribution of the number of events during ${\cal T}$ is the same as the distribution of the number of events during the shifted set $\{u+t : u \in {\cal T}\}$ for any time shift $t$.
A point process is event-stationary if the joint distribution of $T_{k_{1}},\ldots ,T_{k_{n}}$ is the same as that of $T_{k_{1}+\ell },\ldots ,T_{k_{n}+\ell }$ for any integer sequence of indices $k_{1}, \ldots ,k_{n}$ and any integer shift $\ell$. For a given model of a point process, one may often consider either the event-stationary or the time-stationary case. The probability laws of both cases agree for the Poisson process; however, this is not true in general. For MAPs, time-stationarity and event-stationarity are easily characterized by the initial distribution of the background CTMC. Starting it at its stationary distribution yields time-stationarity, and starting at the stationary distribution of an associated discrete-time Markov chain yields event-stationarity. This associated discrete-time Markov chain, with transition probability matrix P, defined later (see Section 2), records the state at every time point $T_{n}$ and is indexed by the (discrete) time index, n.
A common way to parametrise MAPs is by considering the generator Q of a finite-state irreducible CTMC and setting $Q= C + D$. The off-diagonal elements of the matrix C determine state transitions without event counts, while the diagonal elements of C are negative. The matrix D determines state transitions associated with event counts. Such a parametrisation hints at considering two special cases: Markov-modulated Poisson processes (MMPPs), arising from a diagonal matrix D, and Markov-switched Poisson processes (MSPPs), arising from a diagonal matrix C. We note that a single probability law of a MAP can be represented in multiple ways, and the diagonal-D (MMPP) or diagonal-C (MSPP) representation is only one of many forms.
MMPPs are a widely used class of processes in modelling and are a typical example of a Cox process, also known as a doubly stochastic Poisson process [11, 28]. For a detailed outline of a variety of classic MMPP results, see the work of Fischer and Meier-Hellstern [10] and the references therein. MSPPs were introduced by Liu and Neuts [20] and, to date, have not been very popular for modelling. However, the duality of diagonal D versus diagonal C motivates us to consider and contrast both these processes. We also note that MSPPs include, as special cases, hyperexponential renewal processes as well as Markovian transition counting processes (MTCPs); the latter is an alternative model for MMPPs [3].
There are two key random variables associated with a given MAP, which we denote by $T_{1}^{\boldsymbol {\pi }}$ and $T_{1}^{\boldsymbol {\alpha }}$. These are distributed as the first interarrival time, $T_{1}$, in the time-stationary and event-stationary versions, respectively. We note that ${\boldsymbol {\pi }}$ and ${\boldsymbol {\alpha }}$ indicate the nature of the initial distribution of the MAP; their meaning becomes clear in Proposition 2.2 below.
Our focus in this paper is on second-order properties of MMPPs and MSPPs and related traits. Consider the squared coefficient of variation and the limiting index of dispersion of counts given by
$$ c^{2} = \frac{\mathrm{Var}(T_{1}^{\boldsymbol{\alpha}})}{\mathbb{E}[T_{1}^{\boldsymbol{\alpha}}]^{2}}, \qquad d^{2} = \lim_{t \to \infty} \frac{\mathrm{Var}(N(t))}{\mathbb{E}[N(t)]}, \tag{1.1} $$
respectively.
From a modelling perspective, MMPPs are often used for bursty traffic. In fact, in certain cases, modelling folklore has assumed that for MMPPs, $c^{2} \ge 1$ (see, for example, [16]). The $c^{2} \ge 1$ assumption is perhaps used without direct proof due to the fact that $d^{2} \ge 1$ is straightforward to verify, and there is a similarity between these measures (for example, for a renewal process, $c^{2} = d^{2}$). However, as we highlight in this paper, determining such “burstiness” properties is not straightforward.
A related property is that $T_{1}^{\boldsymbol {\alpha }}$ exhibits a decreasing hazard rate (DHR), where for a random variable with probability density function (PDF) $f(t)$ and cumulative distribution function (CDF) $F(t)$, the hazard rate is
$$ h(t) = \frac{f(t)}{1 - F(t)}. $$
A further related property is the stochastic order (see [4, Ch. VII]) $T_{1}^{\boldsymbol {\pi }} \ge _{\mbox {st}} T_{1}^{\boldsymbol {\alpha }}$. We denote these properties as follows:
(I) $d^{2} \ge 1$;
(II) $T_{1}^{\boldsymbol{\alpha}}$ is DHR;
(III) $c^{2} \ge 1$;
(IV) $T_{1}^{\boldsymbol{\pi}} \ge_{\mbox{st}} T_{1}^{\boldsymbol{\alpha}}$.
All these properties are related and in this paper, we highlight the relationships between (I), (II), (III) and (IV) and establish the following. For MSPPs and MMPPs of order $2$ , properties (I)–(IV) hold. For general MMPPs, it is known that (I) holds; however, a counterexample shows that (II) does not hold and we conjecture that (III) and (IV) hold. We note that (I) differs from (II)–(IV) as $d^{2}$ is not directly related to the random variable $T_{1}$ .
Our interest in this class of problems stemmed from relationships between different types of MAPs, as discussed by Nazarathy and Weiss [23], and Asanjarani and Nazarathy [3]. Once it became evident that $c^{2} \ge 1$ for MMPPs is an open problem, even though it is acknowledged as a modelling fact in folklore, we searched for alternative proof avenues. This led to the stochastic order in (IV), as well as to considering DHR properties.
The remainder of the paper is structured as follows. In Section 2, we present preliminaries, focusing on the relationships between (I)–(IV) as well as defining MMPPs and MSPPs. In Section 3, we present our main results and some conjectures about MMPPs. We conclude the paper in Section 4.
2 Preliminaries
First, consider (I)–(IV) and their relationships. To establish (III), there are several possible avenues based on (I), (II) and (IV). We now explain these relationships.

• Using (I): First, from the theory of simple point processes on the line, note the relationship between $d^{2}$ and $c^{2}$ :
$$ \begin{align*} d^{2} = c^{2}\bigg(1+2\sum_{j=1}^{\infty} \frac{\mathrm{Cov}(T_{0}^{\boldsymbol{\alpha}},T_{j}^{\boldsymbol{\alpha}})}{\mathrm{Var}(T_{0}^{\boldsymbol{\alpha}})}\bigg) \end{align*} $$
(for more details, see [12]). However, the autocorrelation structure is typically intractable, and hence this identity does not directly yield results. If we were focusing on a renewal process, where $T_{i}$ and $T_{j}$ are independent for $i \neq j$, then it would immediately show that $d^{2} = c^{2}$. However, our focus is broader, and hence (I) (indicating that $d^{2} \ge 1$) does not appear to yield a path to (III).
• Using (II): An alternative way is to consider (II) and use the fact that for any DHR random variable, we have $c^{2} \ge 1$ (see [27]). Hence, if (II) holds, then (III) holds.

• Using (IV): We have the following lemma, implying that (III) is a consequence of the stochastic order (IV).
Lemma 2.1. Consider a simple non-transient point process on the line, and let $T_{1}^{\boldsymbol {\pi }}$ and $T_{1}^{\boldsymbol {\alpha }}$ represent the first interarrival time in the time-stationary case and the event-stationary case, respectively. Then, $c^{2} \ge 1$ if and only if $\mathbb {E}[T_{1}^{\boldsymbol {\pi }}] \geq \mathbb {E}[T_{1}^{\boldsymbol {\alpha }}]$.
Proof. From point process theory (see, for example, (3.4.17) of [9]),
$$ \mathbb{P}(T_{1}^{\boldsymbol{\pi}}> t) = \lambda^{*} \int_{t}^{\infty} \mathbb{P}(T_{1}^{\boldsymbol{\alpha}}> u)\, du, $$
where
$$ \lambda^{*} = \frac{1}{\mathbb{E}[T_{1}^{\boldsymbol{\alpha}}]}. $$
Now,
$$ \mathbb{E}[T_{1}^{\boldsymbol{\pi}}] = \lambda^{*} \int_{0}^{\infty} \int_{t}^{\infty} \mathbb{P}(T_{1}^{\boldsymbol{\alpha}}> u)\, du\, dt = \frac{\mathbb{E}[(T_{1}^{\boldsymbol{\alpha}})^{2}]}{2\, \mathbb{E}[T_{1}^{\boldsymbol{\alpha}}]} = \frac{\mathbb{E}[T_{1}^{\boldsymbol{\alpha}}]}{2}\, (c^{2}+1), $$
and we obtain the result.
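To make the lemma concrete, the following Python sketch uses a hypothetical hyperexponential interarrival distribution (a mixture of two exponentials, which always has $c^{2} \ge 1$) and the standard length-biasing identity $\mathbb{E}[T_{1}^{\boldsymbol{\pi}}] = \mathbb{E}[(T_{1}^{\boldsymbol{\alpha}})^{2}]/(2\,\mathbb{E}[T_{1}^{\boldsymbol{\alpha}}])$; the chosen mixture weights and rates are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical hyperexponential: Exp(1) w.p. 1/2, Exp(10) w.p. 1/2
w = np.array([0.5, 0.5])    # mixture weights
mu = np.array([1.0, 10.0])  # exponential rates

m1 = w @ (1.0 / mu)            # E[T_1^alpha]
m2 = w @ (2.0 / mu**2)         # E[(T_1^alpha)^2]
c2 = m2 / m1**2 - 1.0          # squared coefficient of variation

# Length-biasing identity: mean of the time-stationary first interarrival time
mean_time_stat = m2 / (2.0 * m1)

# Both sides of the equivalence in Lemma 2.1 hold together
assert c2 > 1.0
assert mean_time_stat > m1
```

Here $c^{2} \approx 2.34 > 1$ and, consistently with the lemma, the time-stationary mean ($\approx 0.918$) exceeds the event-stationary mean ($0.55$).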
MAPs: We now describe the MAP. A MAP of order p (MAP$_{p}$) is generated by a two-dimensional Markov process $\{(N(t), X(t)); t \geq 0\}$ on state space $\{0,1, 2, \ldots \}\times \{1,2, \ldots , p\}$. The counting process $N(\cdot )$ counts the number of “events” in $[0,t]$ with $\mathbb P(N(0)=0)=1$. The phase process $X(\cdot )$ is an irreducible CTMC with state space $\{1, \ldots , p\}$, initial distribution $\boldsymbol {\eta }$ and generator matrix Q. A MAP is characterized by the parameters $(\boldsymbol {\eta }, C, D)$, where the matrix C has negative diagonal elements and nonnegative off-diagonal elements, and records the rates of phase transitions which are not associated with an event. The matrix D has nonnegative elements and describes the rates of phase transitions which are accompanied by an event (an increase of $N(t)$ by 1). Moreover, we have $Q=C+D$. More details are in the books by Asmussen [4, Ch. XI] and He [13, Ch. 2].
MAPs are attractive due to the tractability of many of their properties, including distribution functions, generating functions, and moments of both the counting process $N(t)$ and the sequence of interarrival times $\{T_{n}\}$.
Since Q is assumed irreducible and finite, it has a unique stationary distribution $\boldsymbol {\pi }$ satisfying $\boldsymbol {\pi } Q = \mathbf {0}^{\prime }$ and $\boldsymbol {\pi } \mathbf {1} = 1$, where $\mathbf {0}^{\prime }$ is a row vector of 0’s and $\mathbf {1}$ is a column vector of 1’s. Note that from $Q {\mathbf 1} = \mathbf {0}$, we have $C {\mathbf 1}=-D {\mathbf 1}$. Of further interest is the discrete-time Markov chain with irreducible stochastic matrix $P = (-C)^{-1}D$ and stationary distribution $\boldsymbol {\alpha }$, where $\boldsymbol {\alpha } P=\boldsymbol {\alpha }$ and $\boldsymbol {\alpha } \mathbf {1}=1$.
Observe the relation between the stationary distributions $\boldsymbol {\pi }$ and $\boldsymbol {\alpha }$:
$$ \boldsymbol{\alpha} = \frac{\boldsymbol{\pi} D}{\lambda^{*}}, \tag{2.1} $$
where $\lambda ^{*} = \boldsymbol {\pi } D {\mathbf 1} = -\boldsymbol {\pi } C {\mathbf 1}$.
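As a numerical sanity check of these relations, the following sketch (standing in for the paper's Julia code; the order-3 generator Q and diagonal D are arbitrary hypothetical values) computes $\boldsymbol{\pi}$, P, $\boldsymbol{\alpha}$ and $\lambda^{*}$, and verifies (2.1):

```python
import numpy as np

def stationary_generator(M):
    """Stationary row vector x of M: solves x M = 0 with x 1 = 1.
    Pass a CTMC generator Q, or P - I for a stochastic matrix P."""
    p = M.shape[0]
    A = np.vstack([M.T, np.ones(p)])
    b = np.zeros(p + 1)
    b[-1] = 1.0
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical order-3 MAP (any irreducible Q and nonnegative diagonal D work)
Q = np.array([[-3.0, 1.0, 2.0],
              [2.0, -4.0, 2.0],
              [1.0, 1.0, -2.0]])
D = np.diag([1.0, 5.0, 0.5])
C = Q - D
one = np.ones(3)

pi = stationary_generator(Q)
P = np.linalg.solve(-C, D)                   # P = (-C)^{-1} D
alpha = stationary_generator(P - np.eye(3))  # alpha P = alpha
lam_star = pi @ D @ one

assert np.allclose(alpha, pi @ D / lam_star)      # relation (2.1)
assert np.isclose(lam_star, -pi @ C @ one)        # lambda* = -pi C 1
```

The check $\boldsymbol{\alpha} P = \boldsymbol{\alpha}$ for $\boldsymbol{\alpha} = \boldsymbol{\pi}D/\lambda^{*}$ follows from $\boldsymbol{\pi}D = -\boldsymbol{\pi}C$, exactly as in the identities above.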
The following known proposition, as distilled from the literature [4, Ch. XI], provides the key results on MAPs that we use in this paper. It shows that $T_{1}$ is a phase-type (PH) random variable with parameters $(\boldsymbol {\eta }, C)$, for the initial distribution of the phase and the subgenerator matrix, respectively. It further shows that the initial distribution of the phase process may render the MAP time-stationary or event-stationary. This proposition also motivates the notation $T_{1}^{\boldsymbol {\pi }}$ and $T_{1}^{\boldsymbol {\alpha }}$.
Proposition 2.2. Consider a MAP with parameters $(\boldsymbol {\eta }, C, D)$. Then, for $t\ge 0$,
$$ \mathbb{P}(T_{1}> t) = \boldsymbol{\eta}\, e^{Ct}\, \mathbf{1}. $$
Further, if $\boldsymbol {\eta } = \boldsymbol {\pi }$ , then the MAP is timestationary, and if $\boldsymbol {\eta }=\boldsymbol {\alpha }$ , it is eventstationary, where $\boldsymbol {\pi }$ and $\boldsymbol {\alpha }$ are the stationary distributions of Q and P, respectively.
Note that for such a $PH(\boldsymbol {\eta }, C)$ random variable, the density $f(t)$ and the hazard rate $h(t)$ are respectively
$$ f(t) = \boldsymbol{\eta}\, e^{Ct} (-C)\, \mathbf{1}, \qquad h(t) = \frac{\boldsymbol{\eta}\, e^{Ct} (-C)\, \mathbf{1}}{\boldsymbol{\eta}\, e^{Ct}\, \mathbf{1}}. $$
Further, to show DHR, the derivative of the hazard rate needs to be non-positive, which holds if and only if
$$ (\boldsymbol{\eta}\, e^{Ct} C \mathbf{1})^{2} - (\boldsymbol{\eta}\, e^{Ct} \mathbf{1})(\boldsymbol{\eta}\, e^{Ct} C^{2} \mathbf{1}) \le 0 \quad \text{for all } t \ge 0. $$
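The survival function, density and hazard rate of a PH random variable are straightforward to evaluate numerically via the matrix exponential. A minimal sketch (using SciPy's `expm`; the functions and the sanity-check rate `mu` are our own illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

def ph_survival(eta, C, t):
    """P(T > t) = eta e^{Ct} 1 for a PH(eta, C) random variable."""
    return float(eta @ expm(C * t) @ np.ones(len(eta)))

def ph_density(eta, C, t):
    """f(t) = eta e^{Ct} (-C) 1."""
    return float(eta @ expm(C * t) @ (-C) @ np.ones(len(eta)))

def ph_hazard(eta, C, t):
    """h(t) = f(t) / (1 - F(t))."""
    return ph_density(eta, C, t) / ph_survival(eta, C, t)

# Sanity check: PH(1, [-mu]) is exponential(mu), whose hazard is constant mu
mu = 2.0
eta = np.array([1.0])
C = np.array([[-mu]])
assert np.isclose(ph_hazard(eta, C, 0.7), mu)
```

Checking DHR numerically then amounts to evaluating `ph_hazard` (or the second-order expression above) on a grid of $t$ values.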
We now describe the secondorder properties associated with each case.
Event-stationary case: For an event-stationary MAP (sometimes referred to as an interval-stationary MAP [10]), where $\boldsymbol {\eta }=\boldsymbol {\alpha }$, the (generic) interarrival time is phase-type distributed, $PH(\boldsymbol {\alpha }, C)$, and thus the kth moment is
$$ \mathbb{E}[(T_{1}^{\boldsymbol{\alpha}})^{k}] = k!\, \boldsymbol{\alpha} (-C)^{-k} \mathbf{1}, $$
with the first and second moments (here represented in terms of $\boldsymbol {\pi }$ and C)
$$ \mathbb{E}[T_{1}^{\boldsymbol{\alpha}}] = \frac{-1}{\boldsymbol{\pi} C \mathbf{1}} = \frac{1}{\lambda^{*}}, \qquad \mathbb{E}[(T_{1}^{\boldsymbol{\alpha}})^{2}] = \frac{2\, \boldsymbol{\pi} C^{-1} \mathbf{1}}{\boldsymbol{\pi} C \mathbf{1}}. $$
Hence, the squared coefficient of variation (SCV) of events (intervals) has the following simple formula:
$$ c^{2} = 2\, \boldsymbol{\pi} C \mathbf{1}\; \boldsymbol{\pi} C^{-1} \mathbf{1} - 1. \tag{2.2} $$
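The SCV formula can be cross-checked against the factorial-moment formula $\mathbb{E}[(T_{1}^{\boldsymbol{\alpha}})^{k}] = k!\,\boldsymbol{\alpha}(-C)^{-k}\mathbf{1}$. A sketch with a hypothetical order-2 MMPP (switching rates and Poisson rates chosen arbitrarily):

```python
import numpy as np

# Hypothetical order-2 MMPP: Q with switching rates, D = diag(lambda_i)
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
D = np.diag([1.0, 3.0])
C = Q - D
one = np.ones(2)

pi = np.array([2.0, 1.0]) / 3.0      # stationary distribution of Q
alpha = pi @ D / (pi @ D @ one)      # relation between pi and alpha

# Moments of PH(alpha, C): E[T] = alpha (-C)^{-1} 1, E[T^2] = 2 alpha (-C)^{-2} 1
Cinv = np.linalg.inv(C)
m1 = alpha @ (-Cinv) @ one
m2 = 2.0 * alpha @ (Cinv @ Cinv) @ one
c2_moments = m2 / m1**2 - 1.0

# SCV via the compact formula in terms of pi and C
c2_formula = 2.0 * (pi @ C @ one) * (pi @ Cinv @ one) - 1.0

assert np.isclose(c2_moments, c2_formula)
```

For these parameters both routes give $c^{2} = 11/9$, in line with the fact that unequal Poisson rates make the MMPP over-dispersed.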
Time-stationary case: For the time-stationary MAP with parameters $(\boldsymbol {\eta }, C, D)$ (where $\boldsymbol {\eta }=\boldsymbol {\pi }$) (see [4]),
$$ \mathbb{E}[N(t)] = \lambda^{*} t \tag{2.3} $$
and
$$ \mathrm{Var}(N(t)) = (\lambda^{*} + 2\, \boldsymbol{\pi} D D_{Q}^{\sharp} D \mathbf{1})\, t + o(t), \tag{2.4} $$
where $o(t)/t$ vanishes as $t \to \infty$, and $D_{Q}^{\sharp }$ is the deviation matrix associated with Q, defined by the formula
$$ D_{Q}^{\sharp} = \int_{0}^{\infty} ( e^{Qt} - \mathbf{1} \boldsymbol{\pi} )\, dt. $$
Note that in some sources [4, 21], the variance formula (2.4) is presented in terms of the matrix $Q^{-}:=(\mathbf {1}{\boldsymbol {\pi }} - Q)^{-1}$. The relation between these two matrices is $Q^{-}=D_{Q}^{\sharp } +\mathbf {1}{\boldsymbol {\pi }}$ (see [8]).
Applying (2.3) and (2.4), we can write $d^{2}$ in terms of the MAP parameters as
$$ d^{2} = 1 + \frac{2\, \boldsymbol{\pi} D D_{Q}^{\sharp} D \mathbf{1}}{\lambda^{*}}. \tag{2.5} $$
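The deviation matrix is convenient to compute via the relation $Q^{-} = D_{Q}^{\sharp} + \mathbf{1}\boldsymbol{\pi}$ noted above, which avoids the integral. A sketch (same hypothetical order-2 MMPP as before) that also verifies two defining properties of $D_{Q}^{\sharp}$:

```python
import numpy as np

# Hypothetical order-2 MMPP
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
D = np.diag([1.0, 3.0])
one = np.ones(2)
pi = np.array([2.0, 1.0]) / 3.0

# Deviation matrix via D_Q^sharp = (1 pi - Q)^{-1} - 1 pi
one_pi = np.outer(one, pi)
D_sharp = np.linalg.inv(one_pi - Q) - one_pi

lam_star = pi @ D @ one
d2 = 1.0 + 2.0 * (pi @ D @ D_sharp @ D @ one) / lam_star

assert d2 >= 1.0                        # property (I) holds for this instance
assert np.allclose(D_sharp @ one, 0.0)  # deviation matrix has zero row sums
assert np.allclose(pi @ D_sharp, 0.0)   # ... and pi D_Q^sharp = 0
```

For these parameters $d^{2} = 1 + 16/45$, and the zero row-sum property explains why $d^{2} = 1$ whenever all $\lambda_{i}$ are equal (then $D \mathbf{1} \propto \mathbf{1}$).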
MMPP: A MAP with a diagonal matrix D is an MMPP. MMPPs correspond to doubly stochastic Poisson processes (also known as Cox processes), where the modulating process is driven by a CTMC. MMPPs have been used extensively in stochastic modelling and analysis [10]. The parameters of an MMPP$_{p}$ are $D= \operatorname{diag}(\lambda _{i})$, where $\lambda _{i} \geq 0$ for $i=1, \ldots , p$, and $C=Q-D$. For MMPPs, (2.3) and (2.4) can be simplified by using the relations
$$ D \mathbf{1} = (\lambda_{1}, \ldots, \lambda_{p})^{\prime}, \qquad \boldsymbol{\pi} D = \lambda^{*} \boldsymbol{\alpha}, $$
where the last equation comes from (2.1).
MSPP: A MAP with a diagonal matrix C is an MSPP [2, 13]. For an MSPP$_{p}$, events switch between p Poisson processes with rates $D_{11}, \ldots , D_{pp}$, where each phase transition also incurs an event. This is in contrast to MMPPs, where transitions of the phase process never generate events. We denote the diagonal elements of $-C$ by $\gamma _{i}$ and hence,
$$ C = -\operatorname{diag}(\gamma_{i}), \qquad D = Q + \operatorname{diag}(\gamma_{i}). $$
Note that analysing MSPPs is considerably easier than analysing MMPPs, because a diagonal C is much easier to handle than a non-diagonal C, and in an (irreducible) MMPP, C must be non-diagonal.
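The duality between the two constructions can be sketched as follows: starting from a common irreducible generator Q (hypothetical values below), an MMPP puts the rates in a diagonal D, while an MSPP puts them in a diagonal $-C$ and needs $\gamma_{i} \ge -Q_{ii}$ so that D remains nonnegative:

```python
import numpy as np

# A common, hypothetical irreducible generator
Q = np.array([[-3.0, 1.0, 2.0],
              [2.0, -4.0, 2.0],
              [1.0, 1.0, -2.0]])

# MMPP: D diagonal, C = Q - D carries all of Q's off-diagonal structure
lam = np.array([1.0, 5.0, 0.5])
C_mmpp, D_mmpp = Q - np.diag(lam), np.diag(lam)

# MSPP: C diagonal; need gamma_i >= -Q_ii so that D = Q + diag(gamma) >= 0
gamma = np.array([4.0, 5.0, 3.0])
C_mspp, D_mspp = -np.diag(gamma), Q + np.diag(gamma)

assert np.allclose(C_mmpp + D_mmpp, Q) and np.allclose(C_mspp + D_mspp, Q)
assert (D_mspp >= 0).all()                            # valid event-rate matrix
assert np.allclose(C_mspp, np.diag(np.diag(C_mspp)))  # C is diagonal
```

Both pairs represent MAPs with the same background CTMC, which is what makes the contrast between the two classes meaningful.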
Properties (I)–(IV) for MAPs: Using the results above, for any irreducible MAP with matrices C and D, the main properties (I)–(IV) of this paper can be formulated as
$$ \boldsymbol{\pi} D D_{Q}^{\sharp} D \mathbf{1} \ge 0, \tag{2.6} $$
$$ (\boldsymbol{\alpha}\, e^{Ct} C \mathbf{1})^{2} - (\boldsymbol{\alpha}\, e^{Ct} \mathbf{1})(\boldsymbol{\alpha}\, e^{Ct} C^{2} \mathbf{1}) \le 0 \quad \text{for all } t \ge 0, \tag{2.7} $$
$$ \boldsymbol{\pi} C \mathbf{1}\; \boldsymbol{\pi} C^{-1} \mathbf{1} \ge 1, \tag{2.8} $$
$$ (\boldsymbol{\pi} - \boldsymbol{\alpha})\, e^{Ct}\, \mathbf{1} \ge 0 \quad \text{for all } t \ge 0, \tag{2.9} $$
respectively, where $D_{Q}^{\sharp }$ is the deviation matrix defined by the formula
$$ D_{Q}^{\sharp} = \int_{0}^{\infty} ( e^{Qt} - \mathbf{1} \boldsymbol{\pi} )\, dt. $$
3 Main results
We now show that MSPPs and MMPPs of order 2 satisfy (I)–(IV). Establishing (I), $d^{2} \ge 1$, is not a difficult task for either MMPPs or MSPPs.
Proposition 3.1. An MMPP and an MSPP exhibit $d^{2} \ge 1$ .
Proof. It is a well-known result that all doubly stochastic Poisson processes (Cox processes) satisfy $d^{2} \ge 1$ [18, Ch. 6]. This establishes the claim for MMPPs.
Now, the fact that $d^{2}\geq 1$ for any given MMPP, together with (2.5), results in
$$ \boldsymbol{\pi} D D_{Q}^{\sharp} D \mathbf{1} \ge 0 \quad \text{for any nonnegative diagonal matrix } D. \tag{3.1} $$
However, all MAPs satisfy ${\boldsymbol {\pi }}D D_{Q}^{\sharp } D \mathbf {1}={\boldsymbol {\pi }}(-C) D_{Q}^{\sharp } (-C) \mathbf {1}$. Since for an MSPP, $-C$ is a diagonal nonnegative matrix, from (3.1), we have (2.6).
It is not difficult to show that (II), the DHR property, holds for MSPPs.
Proposition 3.2. For an MSPP, the hazard rate of the stationary interarrival time is nonincreasing.
Proof. Denoting $-C=\operatorname{diag}(\gamma _{i})$ and $\mathbf {u}=e^{Ct} \mathbf {1}$ (so that $u_{i} = e^{-\gamma_{i} t}$), the left-hand side of (2.7) can be written as
$$ \bigg( \sum_{i=1}^{p} \alpha_{i} \gamma_{i} u_{i} \bigg)^{2} - \bigg( \sum_{i=1}^{p} \alpha_{i} u_{i} \bigg) \bigg( \sum_{i=1}^{p} \alpha_{i} \gamma_{i}^{2} u_{i} \bigg). $$
Denoting $v_{i}=\alpha _{i} u_{i}$, setting $q_{i}=v_{i}/ (\sum _{i=1}^{p} v_{i})$ and dividing by $(\sum _{i=1}^{p} v_{i})^{2}$ results in
$$ \bigg( \sum_{i=1}^{p} q_{i} \gamma_{i} \bigg)^{2} - \sum_{i=1}^{p} q_{i} \gamma_{i}^{2}. $$
The above expression can be viewed as the negative of the variance of a random variable that takes the value $\gamma _{i}$ with probability $q_{i}$, and is therefore non-positive. Hence, we have (2.7).
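The non-positivity argument can be checked numerically: for a hypothetical MSPP (same style of parameters as earlier sketches), the hazard-derivative quantity on the left-hand side of (2.7) stays non-positive over a grid of $t$ values:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical order-3 MSPP built from an irreducible generator Q
Q = np.array([[-3.0, 1.0, 2.0],
              [2.0, -4.0, 2.0],
              [1.0, 1.0, -2.0]])
gamma = np.array([4.0, 5.0, 3.0])
C = -np.diag(gamma)
D = Q + np.diag(gamma)
one = np.ones(3)

# pi and alpha as in Section 2
A = np.vstack([Q.T, np.ones(3)])
b = np.zeros(4); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
alpha = pi @ D / (pi @ D @ one)

# the hazard-rate derivative has the sign of this quantity; DHR means <= 0
for t in np.linspace(0.0, 10.0, 101):
    E = expm(C * t)
    lhs = (alpha @ E @ C @ one) ** 2 \
          - (alpha @ E @ one) * (alpha @ E @ C @ C @ one)
    assert lhs <= 1e-12  # non-positive up to roundoff
```

Since C is diagonal here, `expm(C * t)` is just $\operatorname{diag}(e^{-\gamma_{i} t})$, mirroring the $u_{i}$ of the proof.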
However, somewhat surprisingly, MMPPs do not necessarily possess DHR. An exception is MMPP$_{2}$, as shown in Proposition 3.7; however, DHR for higher-order MMPPs does not always hold. The gist of the following example was communicated to us by Miklós Telek and Illés Horváth.
Example 3.3. Set
As shown in Figure 1, the hazard rate function for an MMPP with the above matrices is not monotone. Hence, at least for general MMPPs, trying to show (III), $c^{2}\ge 1$ , via hazard rates is not a viable avenue.
Since hazard rates do not appear to be a viable path for establishing (III) for MMPPs, an alternative may be to consider the stochastic order (IV). Starting with MSPPs, we see that this property holds.
Proposition 3.4. For an MSPP, $T_{1}^{\boldsymbol {\pi }} \ge _{\mbox {st}} T_{1}^{\boldsymbol {\alpha }}$ .
Proof. Using inequality (2.9), we claim
$$ (\boldsymbol{\pi} - \boldsymbol{\alpha})\, e^{Ct}\, \mathbf{1} = \sum_{i=1}^{p} (\pi_{i} - \alpha_{i})\, e^{-\gamma_{i} t} \ge 0 \quad \text{for all } t \ge 0. $$
Without loss of generality, we assume that the diagonal elements of the matrix $-C$ are ordered, $0<\gamma _{1}\leq \gamma _{2} \leq \cdots \leq \gamma _{p}$ (with $\gamma _{i} \neq \gamma _{j}$ for some $i,j$). There is a possibility that for some $1 < p^{\prime } < p$, $0=\gamma _{1}=\gamma _{2} = \cdots =\gamma _{p^{\prime }-1}$ and $0<\gamma _{p^{\prime }}$; however, in the rest of the proof, we assume that $p^{\prime }=1$, meaning that all $\gamma _{i}$ are strictly positive. Adapting to the case of $p^{\prime }>1$ is straightforward.
Now, $\{\lambda ^{*}-\gamma _{i}\}_{i=1, \ldots , p}$ is a non-increasing sequence and therefore, in the sequence $\{\pi _{i}-\alpha _{i}\}=\{\pi _{i}(\lambda ^{*}-\gamma _{i})/\lambda ^{*}\}$, when an element $\pi _{k}-\alpha _{k}$ is negative, all the elements $\pi _{i}-\alpha _{i}$ for $i\geq k$ are also negative. Moreover, both $\boldsymbol {\pi }$ and $\boldsymbol {\alpha }$ are probability vectors, so $(\boldsymbol {\pi }-\boldsymbol {\alpha })\mathbf {1}=\sum _{i}(\pi _{i}-\alpha _{i})=0$. Therefore, at least the first element in the sequence $\{\pi _{i}-\alpha _{i}\}$ is positive. Hence, there exists an index $1 < k \leq p$ such that $\pi _{i}-\alpha _{i}$ is nonnegative for $i=1, \ldots , k-1$ and negative for $i=k,\ldots , p$. Therefore,
$$ \sum_{i=1}^{p} (\pi_{i} - \alpha_{i})\, e^{-\gamma_{i} t} \ge e^{-\gamma_{k-1} t} \sum_{i=1}^{k-1} (\pi_{i} - \alpha_{i}) - e^{-\gamma_{k} t} \sum_{i=k}^{p} (\alpha_{i} - \pi_{i}) = L\, ( e^{-\gamma_{k-1} t} - e^{-\gamma_{k} t} ) \ge 0, $$
where $L=\sum _{i=1}^{k-1}({\pi _{i}}-{\alpha _{i}})=\sum _{i=k}^{p}({\alpha _{i}}-\pi _{i}) \ge 0$. Here, the inequalities are a consequence of the ordering of the $\gamma _{i}$.
Hence, via Lemma 2.1 or alternatively via the DHR property in Proposition 3.2, we have the following result.
Corollary 3.5. For an MSPP, $c^{2} \ge 1$ .
In fact, for MSPPs, this is an easy result and it can also be proved independently by using the Cauchy–Schwarz inequality. Further, we can find an upper bound for $c^{2}$ as follows.
Proposition 3.6. An MSPP with diagonal matrix $C=-\operatorname{diag} (\gamma _{i})$ satisfies
$$ c^{2} \le \frac{2 \kappa^{2}}{\gamma^{2}} - 1, $$
where $\kappa =(\min_{i}{\gamma _{i}}+\max_{i}{\gamma _{i}})/2$ and $\gamma =\sqrt {(\min_{i}{\gamma _{i}})(\max_{i}{\gamma _{i}})}$.
Proof. From (2.2), we have $c^{2} + 1= 2\,\boldsymbol {\pi } C {\mathbf 1}\; \boldsymbol {\pi } C^{-1} {\mathbf 1}$. For $C=-\operatorname{diag}(\gamma _{i})$, we have $C^{-1}=-\operatorname{diag}(1/\gamma _{i})$ and so,
$$ c^{2} + 1 = 2 \bigg( \sum_{i=1}^{p} \pi_{i} \gamma_{i} \bigg) \bigg( \sum_{i=1}^{p} \frac{\pi_{i}}{\gamma_{i}} \bigg). \tag{3.3} $$
From the Cauchy–Schwarz inequality and the Kantorovich inequality [26],
$$ 1 \le \bigg( \sum_{i=1}^{p} \pi_{i} \gamma_{i} \bigg) \bigg( \sum_{i=1}^{p} \frac{\pi_{i}}{\gamma_{i}} \bigg) \le \frac{\kappa^{2}}{\gamma^{2}}. \tag{3.4} $$
Combining (3.3) and (3.4) results in
$$ 2 \le c^{2} + 1 \le \frac{2 \kappa^{2}}{\gamma^{2}}, $$
which completes the proof.
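The two-sided bound at the heart of this proof (Cauchy–Schwarz below, Kantorovich above) is easy to stress-test numerically on random probability vectors and rate vectors (all values below are arbitrary illustrative choices):

```python
import numpy as np

# Check 1 <= (sum pi_i gamma_i)(sum pi_i / gamma_i) <= kappa^2 / gbar^2
rng = np.random.default_rng(0)
for _ in range(1000):
    p = 4
    pi = rng.random(p)
    pi /= pi.sum()                      # random probability vector
    gamma = rng.random(p) * 9.0 + 1.0   # rates in [1, 10]

    prod = (pi @ gamma) * (pi @ (1.0 / gamma))
    kappa = (gamma.min() + gamma.max()) / 2.0
    gbar = np.sqrt(gamma.min() * gamma.max())

    assert 1.0 - 1e-12 <= prod <= kappa**2 / gbar**2 + 1e-12
```

The lower bound is attained when all $\gamma_{i}$ are equal (an exponential interarrival time, $c^{2} = 1$), and the upper bound is tight when the mass of $\boldsymbol{\pi}$ splits between the extreme rates.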
For MMPPs, we do not have general proofs for properties (III) and (IV). Still, for MMPPs of order two (MMPP $_{2}$ ), things are easier and we are able to show that all properties (I)–(IV) hold.
Proposition 3.7. For an MMPP$_{2}$, $c^{2}\geq 1$ and $d^{2}\geq 1$, the interarrival time $T_{1}^{\boldsymbol{\alpha}}$ is DHR, and the stochastic order $T_{1}^{\boldsymbol {\pi }} \ge _{\mbox {st}} T_{1}^{\boldsymbol {\alpha }}$ holds. Further, $c^{2} = 1$ and $d^{2}=1$ if and only if $\lambda _{1} = \lambda _{2}$.
Proof. Consider an MMPP$_{2}$ with parameters
$$ Q = \begin{pmatrix} -\sigma_{1} & \sigma_{1} \\ \sigma_{2} & -\sigma_{2} \end{pmatrix}, \qquad D = \begin{pmatrix} \lambda_{1} & 0 \\ 0 & \lambda_{2} \end{pmatrix}, \qquad C = Q - D. \tag{3.5} $$
Then, $\boldsymbol {\pi }=(\sigma _{2},\, \sigma _{1})/(\sigma _{1}+\sigma _{2})$. As in [15], evaluation of the deviation matrix through (for example) Laplace transform inversion yields
$$ D_{Q}^{\sharp} = \frac{1}{(\sigma_{1}+\sigma_{2})^{2}} \begin{pmatrix} \sigma_{1} & -\sigma_{1} \\ -\sigma_{2} & \sigma_{2} \end{pmatrix}. $$
Therefore, from (1.1),
$$ d^{2} = 1 + \frac{2 \sigma_{1} \sigma_{2} (\lambda_{1}-\lambda_{2})^{2}}{(\sigma_{1}+\sigma_{2})^{2} (\sigma_{2} \lambda_{1} + \sigma_{1} \lambda_{2})}. $$
Further, explicit computation yields
$$ c^{2} = 1 + \frac{2 \sigma_{1} \sigma_{2} (\lambda_{1}-\lambda_{2})^{2}}{(\sigma_{1}+\sigma_{2})^{2} (\lambda_{1} \lambda_{2} + \sigma_{1} \lambda_{2} + \sigma_{2} \lambda_{1})}. $$
Thus, it is evident that the MMPP$_{2}$ has $d^{2}>1$ and $c^{2}> 1$ as long as $\lambda _{1} \neq \lambda _{2}$, and $d^{2}=c^{2}=1$ when $\lambda _{1} = \lambda _{2}$.
For DHR and the stochastic order, first note that for an MMPP$_{2}$ with the above parameters, $\boldsymbol {\alpha }=(\sigma _{2} \lambda _{1},\, \sigma _{1} \lambda _{2})/(\sigma _{1} \lambda _{2}+\sigma _{2} \lambda _{1})$. By setting $a=\sigma _{2} \lambda _{1} +\lambda _{2}(\sigma _{1}+\lambda _{1})>0$ and $b= \sigma _{1} +\sigma _{2} + \lambda _{1}+\lambda _{2}>0$, after some simplification, the left-hand side of (2.7) is given by
$$ -\frac{\sigma_{1} \sigma_{2}\, a\, (\lambda_{1}-\lambda_{2})^{2}}{(\sigma_{2} \lambda_{1}+\sigma_{1} \lambda_{2})^{2}}\; e^{-bt}, $$
which is strictly negative for $\lambda _{1} \neq \lambda _{2}$ and is zero for $\lambda _{1} = \lambda _{2}$. For the stochastic order, by evaluating the matrix exponential and simplifying,
$$ (\boldsymbol{\pi} - \boldsymbol{\alpha})\, e^{Ct}\, \mathbf{1} = \frac{\sigma_{1} \sigma_{2} (\lambda_{1}-\lambda_{2})^{2}}{(\sigma_{1}+\sigma_{2})(\sigma_{2} \lambda_{1}+\sigma_{1} \lambda_{2})\, r}\, \big( e^{(r-b)t/2} - e^{-(r+b)t/2} \big), $$
where $r = \sqrt{b^{2}-4a}$, which is strictly positive for $\lambda _{1} \neq \lambda _{2}$ and is zero for $\lambda _{1} = \lambda _{2}$.
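The closed-form MMPP$_{2}$ expressions for $c^{2}$ and $d^{2}$ can be cross-checked against the general matrix formulas of Section 2. The sketch below re-derives both; treat the closed forms as a numerical check rather than a quoted result:

```python
import numpy as np

def mmpp2_c2_d2(s1, s2, l1, l2):
    """Closed-form c^2 and d^2 for an MMPP_2 with switching rates s1, s2
    and Poisson rates l1, l2 (verified below against the matrix formulas)."""
    num = 2.0 * s1 * s2 * (l1 - l2) ** 2
    d2 = 1.0 + num / ((s1 + s2) ** 2 * (s2 * l1 + s1 * l2))
    c2 = 1.0 + num / ((s1 + s2) ** 2 * (l1 * l2 + s1 * l2 + s2 * l1))
    return c2, d2

def mmpp2_c2_d2_matrix(s1, s2, l1, l2):
    """The same quantities via the general MAP formulas."""
    Q = np.array([[-s1, s1], [s2, -s2]])
    D = np.diag([l1, l2])
    C = Q - D
    one = np.ones(2)
    pi = np.array([s2, s1]) / (s1 + s2)
    one_pi = np.outer(one, pi)
    D_sharp = np.linalg.inv(one_pi - Q) - one_pi  # deviation matrix
    lam = pi @ D @ one
    c2 = 2.0 * (pi @ C @ one) * (pi @ np.linalg.inv(C) @ one) - 1.0
    d2 = 1.0 + 2.0 * (pi @ D @ D_sharp @ D @ one) / lam
    return c2, d2

for args in [(1.0, 2.0, 1.0, 3.0), (0.3, 0.7, 2.0, 2.0), (5.0, 1.0, 0.1, 4.0)]:
    assert np.allclose(mmpp2_c2_d2(*args), mmpp2_c2_d2_matrix(*args))
```

Note in particular the middle test case with $\lambda_{1} = \lambda_{2}$, for which both routes give $c^{2} = d^{2} = 1$, matching the if-and-only-if statement of Proposition 3.7.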
We note that $c^{2}>1$ in Proposition 3.7 may also be derived by considering a canonical form representation of second-order MAPs, as presented by Bodrog et al. [7], who suggested two canonical form representations for MAPs of order 2; MMPPs fit the first canonical form, represented via matrices $(\tilde{C}, \tilde{D})$ in Definition 1 of [7]. We use tildes so as not to confuse the notation of Bodrog et al. [7] with the notation of our current paper. Given a representation of an MMPP$_{2}$ as in (3.5), one may use a similarity transform jointly for C and D to find an equivalent canonical form representation; that is, a similarity transformation matrix B, with parameters $b_{1}$ and $b_{2}$, which satisfies $B C = \tilde {C} B$ and $B D = \tilde {D} B$. Solving a system of equations for $b_{1}$, $b_{2}$ and the parameters of the canonical form, we find four equivalent solutions, in the sense that the resulting canonical form agrees with the original MMPP$_{2}$. One such solution involves $r= \sqrt {b^{2}-4a}$, where a and b are as in the proof of Proposition 3.7. With this similarity transform and the resulting canonical form parameters in hand, from the work of Bodrog et al. [7, Lemma 1], we observe that $c^{2}> 1$.
Note that one can also represent MSPPs of order 2 in terms of their canonical representations [7]. However, for an MSPP, the correlation coefficient can be both positive and negative, and thus one needs to choose between the two alternative canonical representations.
3.1 Conjectures for MMPP
We embarked on this research due to the folklore assumption (III), namely that for MMPPs, $c^{2} \ge 1$ (see, for example, [16]). To date, there is no known proof for an arbitrary irreducible MMPP. Still, we conjecture that both (III) and (IV) hold for MMPPs.
Conjecture 3.8. For an irreducible MMPP, $c^{2}\ge 1$ .
Conjecture 3.9. For an irreducible MMPP, $T_{1}^{\boldsymbol {\pi }} \ge _{\mbox {st}} T_{1}^{\boldsymbol {\alpha }}$ .
In an attempt to disprove these conjectures, we carried out a numerical experiment by generating random instances of MMPPs. Each instance is created by first generating a matrix Q whose off-diagonal entries are drawn from a uniform $(0,1)$ distribution, with diagonal entries that ensure row sums are $0$. We then generate a matrix D with diagonal elements that are exponentially distributed with rate $1$. Such a $(Q,D)$ pair then implies ${\boldsymbol {\pi }}$ and ${\boldsymbol {\alpha }}$ using their definitions in Section 2. For each such MMPP, we calculate $\boldsymbol {\pi } C {\mathbf 1}\, \boldsymbol {\pi } C^{-1} {\mathbf 1} - 1$ as in (2.8) (for Conjecture 3.8) and $(\boldsymbol {\pi } - \boldsymbol {\alpha })e^{C t} \mathbf {1}$ as in (2.9) (for Conjecture 3.9), where we take $t \in \{0,0.2,0.4,\ldots ,9.8,10.0\}$. A strictly negative value in any of these calculations would imply a counterexample to the associated conjecture.
We repeated this experiment for random MMPP instances of orders $3,4,5$ and $6$, with $10^{6}$ instances for each order. In all the cases, the calculated quantities were greater than $-10^{-15}$. Note that in certain cases, the quantity associated with (IV) was negative, lying in the range $(-10^{-15},-10^{-16}]$. We attribute these negative values to numerical error stemming from the calculation of the matrix exponential $e^{Ct}$. We ran our experiments with the Julia programming language, V1.0 [22]. The calculation time was approximately 1.5 hours on a desktop computer.
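A scaled-down version of this experiment can be sketched as follows (a Python/NumPy stand-in for the paper's Julia code, with far fewer instances and a coarser $t$ grid; the instance generator mirrors the description above):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_mmpp(p, rng):
    """Random MMPP_p: uniform(0,1) off-diagonal Q entries, Exp(1) Poisson rates."""
    Q = rng.random((p, p))
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))  # row sums zero
    D = np.diag(rng.exponential(1.0, p))
    return Q, D

for _ in range(200):  # far fewer than the 10^6 instances per order in the paper
    p = 4
    Q, D = random_mmpp(p, rng)
    C = Q - D
    one = np.ones(p)
    A = np.vstack([Q.T, np.ones(p)])
    b = np.zeros(p + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    alpha = pi @ D / (pi @ D @ one)

    # Conjecture 3.8 quantity: nonnegative iff c^2 >= 1
    assert (pi @ C @ one) * (pi @ np.linalg.inv(C) @ one) - 1.0 >= -1e-12
    # Conjecture 3.9 quantity, on a coarse grid of t
    for t in np.arange(0.0, 10.1, 2.0):
        assert (pi - alpha) @ expm(C * t) @ one >= -1e-12
```

The tolerance `-1e-12` plays the same role as the $-10^{-15}$ threshold above: it absorbs roundoff from the matrix exponential without masking a genuine counterexample.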
The purpose of carrying out such experiments is to try to disprove the conjectures; however, we failed to do so. Also note that when considering uniformly generated instances, it is possible that we miss corner cases that may disprove the conjectures. In trying to find some classes of such corner cases, we considered random cyclic Q matrices with nonzero entries similar to (3.2). We generated $10^{6}$ such (order $4$ ) examples, and all agreed with (III) and (IV) as well.
4 Conclusion
We have highlighted various related properties for point processes on the line and, in particular, MAPs exhibiting diagonal matrices (C or D). Bodrog et al. [7] showed $c^{2} \ge 1$ for MMPPs of order $2$, and here we have derived it, alongside $T_{1}^{\boldsymbol {\pi }} \ge _{\mbox {st}} T_{1}^{\boldsymbol {\alpha }}$, using alternative means. Further, using a similar technique to our MSPP proof (proof of Proposition 3.2), we can also show these properties for MMPPs with symmetric C matrices. However, for general MMPPs of order greater than $2$, proving these properties remains an open problem.
We note that stepping outside of the matrix analytic paradigm and considering general Cox processes is also an option. In fact, since any Cox process can be approximated by an MMPP, one may formulate versions of Conjectures 3.8 and 3.9 for Cox processes under suitable regularity conditions.
There is also a related branch of questions dealing with characterizing the Poisson process via $c^{2} = 1$ and considering when an MMPP is Poisson. For example, for the general class of MAPs, Bean and Green [Reference Bean and Green6] provide a condition for determining if a given MAP is Poisson. It is not hard to construct a MAP with $c^{2} = 1$ that is not Poisson. However, we conjecture that all MMPPs with $c^{2} = 1$ are Poisson. Yet, we do not have a proof. Further, we conjecture that for an MMPP, if $c^{2} = 1$ , then all $\lambda _{i}$ are equal (the converse is trivially true). We do not have a proof for orders beyond two. Related questions also hold for the more general Cox processes.
We also note that the MSPP class of processes generalizes hyperexponential renewal processes, as well as a class of processes called Markovian transition counting processes (MTCPs), as in the authors’ previous work [3]. Further, since the stationary interarrival times of MMPPs are PH distributed, it is also of interest to consider related problems dealing with moments of phase-type distributions (see, for example, [14] and the references therein).
Acknowledgements
Azam Asanjarani’s research is supported by the Faculty Research & Development Fund (FRDF), Faculty of Science, University of Auckland. Yoni Nazarathy is supported by DP180101602. We thank Peter Taylor, Qi-Ming He, Søren Asmussen, Miklós Telek, and Illés Horváth for useful discussions and insights.