
Capacity and Error Exponents of Stationary Point Processes under Random Additive Displacements

Published online by Cambridge University Press:  04 January 2016

Venkat Anantharam*
Affiliation:
University of California, Berkeley
François Baccelli*
Affiliation:
University of Texas at Austin and INRIA
* Postal address: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA. Email address: ananth@eecs.berkeley.edu
** Postal address: Department of Mathematics, The University of Texas at Austin, Austin, TX 78712-1202, USA. Email address: baccelli@math.utexas.edu

Abstract


Consider a real-valued discrete-time stationary and ergodic stochastic process, called the noise process. For each dimension n, one chooses a stationary point process in ℝⁿ and a translation-invariant tessellation of ℝⁿ. Each point is randomly displaced, its displacement vector being a section of length n of the noise process, independently from point to point. The aim is to find a point process and a tessellation that minimize the probability of decoding error, defined as the probability that the displaced version of the typical point does not belong to the cell of that point. We consider the Shannon regime, in which the dimension n tends to ∞ while the logarithm of the intensity of the point processes, normalized by dimension, tends to a constant. We first show that this problem exhibits a sharp threshold: if the sum of the asymptotic normalized logarithmic intensity and the differential entropy rate of the noise process is positive, then the probability of error tends to 1 with n for every point process and every tessellation; if it is negative, then there exist point processes and tessellations for which this probability tends to 0. The error exponent function, which quantifies how quickly the probability of error goes to 0 in n, is then derived using large deviations theory. If the entropy spectrum of the noise satisfies a large deviations principle, then, below the threshold, the error probability goes exponentially fast to 0 with an exponent given in closed form in terms of the rate function of the noise entropy spectrum. This is obtained for two classes of point processes: the Poisson process and a Matérn hard-core point process. From this, new lower bounds on error exponents are derived for Shannon's additive noise channel in the high signal-to-noise-ratio limit; these hold for all stationary and ergodic noises with the above properties and match the best known bounds in the white Gaussian noise case.
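The threshold condition stated above can be made concrete in the simplest special case. For i.i.d. Gaussian noise of variance σ², the differential entropy rate is ½ log(2πeσ²) nats, so the critical normalized log intensity is its negative. The sketch below (function names are illustrative, not taken from the paper) computes this threshold and checks on which side of it a given normalized log intensity falls:

```python
import math

def gaussian_entropy_rate(sigma2):
    """Differential entropy rate (in nats) of i.i.d. N(0, sigma2) noise:
    h = 0.5 * log(2 * pi * e * sigma2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma2)

def critical_log_intensity(sigma2):
    """Sharp threshold on the asymptotic normalized log intensity rho:
    the error probability can be driven to 0 only if rho + h < 0,
    i.e. rho < -h. (Illustrative Gaussian special case.)"""
    return -gaussian_entropy_rate(sigma2)

def below_threshold(rho, sigma2):
    """True when rho + h < 0, the regime where point processes and
    tessellations with vanishing error probability exist."""
    return rho + gaussian_entropy_rate(sigma2) < 0

# For unit-variance Gaussian noise the threshold is -0.5 * ln(2*pi*e):
rho_star = critical_log_intensity(1.0)
```

Lower noise variance lowers the entropy rate and therefore raises the admissible log intensity, matching the intuition that quieter noise permits denser point processes.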

Type
Stochastic Geometry and Statistical Applications
Copyright
© Applied Probability Trust 
