
A Central Limit Theorem for Non-Overlapping Return Times

Published online by Cambridge University Press:  14 July 2016

Oliver Johnson
Affiliation: University of Cambridge
Postal address: Statistical Laboratory, University of Cambridge, Wilberforce Road, Cambridge CB3 0WB, UK. Email address: otj1000@cam.ac.uk

Abstract


Define the non-overlapping return time of a block of a random process to be the number of non-overlapping blocks that pass by before the block in question reappears. We prove a central limit theorem based on these return times. This result has applications to entropy estimation and to the problem of determining whether digits have come from an independent, equidistributed sequence. In the case of an equidistributed sequence, we use an argument based on negative association to prove convergence under conditions weaker than those required in the general case.
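The quantity in question can be illustrated concretely. The sketch below (not from the paper; the function name and the i.i.d. binary example are illustrative assumptions) partitions a sequence into consecutive non-overlapping blocks of length k and counts how many blocks pass before the first block reappears:

```python
import random

def nonoverlapping_return_time(seq, k):
    """Non-overlapping return time of the first length-k block of seq:
    the index of the first later non-overlapping block equal to it
    (i.e. the number of blocks that pass before it reappears),
    or None if the block never recurs within seq."""
    # Cut seq into consecutive, non-overlapping blocks of length k.
    blocks = [tuple(seq[i:i + k]) for i in range(0, len(seq) - k + 1, k)]
    target = blocks[0]
    for j, block in enumerate(blocks[1:], start=1):
        if block == target:
            return j
    return None

# Illustrative example: digits from an independent, equidistributed
# (fair-coin) binary source, for which Kac's theorem suggests the
# return time of a length-k block has mean 2**k.
random.seed(0)
bits = [random.randint(0, 1) for _ in range(10_000)]
print(nonoverlapping_return_time(bits, 3))
```

For an equidistributed i.i.d. source, averaging log of such return times over many blocks gives an entropy estimate, which is the style of application the abstract refers to.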

Type
Research Papers
Copyright
© Applied Probability Trust 2006 
