
On the Relationships Between Lumpability and Filtering of Finite Stochastic Systems

Published online by Cambridge University Press: 14 July 2016

James Ledoux∗
Affiliation: Université de Poitiers and CNRS UMR 6086
Langford B. White∗∗
Affiliation: University of Adelaide
Gary D. Brushe∗∗∗
Affiliation: Defence Science and Technology Organisation
∗Postal address: Département de Mathématiques, Université de Poitiers, BP 30179, Téléport 2, Boulevard Marie et Pierre Curie, F-86962 Futuroscope-Chasseneuil cedex, France.
∗∗Postal address: School of Electrical and Electronic Engineering, University of Adelaide, SA 5005, Australia.
∗∗∗Postal address: Signals Analysis Discipline, C3I Division, Defence Science and Technology Organisation, PO Box 1500, Edinburgh, SA 5111, Australia.

Abstract


The aim of this paper is to provide conditions under which the complexity of state filtering for finite stochastic systems (FSSs) can be reduced. A concept of lumpability for FSSs is introduced, and we show that the unnormalised filter for a lumped FSS has linear dynamics. Two sufficient conditions for this lumpability property to hold are discussed. We show that the first condition is also necessary for the lumped FSS to have linear filter dynamics, and we prove that the second condition allows the filter of the original FSS to be recovered directly from the filter of the lumped FSS. Finally, we generalise an earlier published result on the approximation of a general FSS by a lumpable FSS.
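The flavour of the results can be illustrated with a minimal numerical sketch. The example below is not taken from the paper: the transition matrix P, the partition matrix V, the emission matrices B and B_hat, and the initial distribution pi are all hypothetical. It checks ordinary (strong) lumpability, PV = V P̂, runs the linear unnormalised filter recursion σ_{k+1} = (σ_k P) ⊙ b(y_{k+1}) on both the original and the lumped chain, and verifies that, when the observation likelihoods are constant on each lumped class, aggregating the original filter reproduces the lumped filter exactly.

```python
import numpy as np

# Hypothetical 3-state chain: states {0, 1} form one lumped class, {2} the other.
P = np.array([[0.5, 0.2, 0.3],
              [0.3, 0.4, 0.3],
              [0.1, 0.1, 0.8]])   # transition matrix
V = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])        # partition indicator ("collector") matrix

# Ordinary lumpability asks for a lumped matrix P_hat with P V = V P_hat;
# least squares recovers it exactly when the condition holds.
P_hat = np.linalg.lstsq(V, P @ V, rcond=None)[0]
assert np.allclose(P @ V, V @ P_hat)          # lumpability holds for this P, V

# Observation likelihoods constant on each class (two observation symbols).
B = np.array([[0.9, 0.1],
              [0.9, 0.1],
              [0.2, 0.8]])        # original emission matrix
B_hat = np.array([[0.9, 0.1],
                  [0.2, 0.8]])    # lumped emission matrix

pi = np.array([0.3, 0.3, 0.4])    # initial distribution
pi_hat = pi @ V                   # aggregated initial distribution

obs = [0, 1, 1, 0]                # an arbitrary observation sequence
sigma, sigma_hat = pi.copy(), pi_hat.copy()
for y in obs:
    # Unnormalised filter recursions: linear in sigma (no normalisation step).
    sigma = (sigma @ P) * B[:, y]
    sigma_hat = (sigma_hat @ P_hat) * B_hat[:, y]

# Aggregating the original unnormalised filter gives the lumped filter exactly.
print(np.allclose(sigma @ V, sigma_hat))
```

The key algebraic step is that class-constant likelihoods give diag(b)V = V diag(b̂), which combined with PV = V P̂ lets V commute through every step of the recursion, so the lumped filter evolves on a state space of dimension 2 instead of 3.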

Type: Research Article
Copyright: © Applied Probability Trust 2008
