The principle of maximum entropy is a well-known approach for producing a model of a data-generating distribution. In this approach, if partial knowledge about the distribution is available in the form of a set of information constraints, then the model that maximizes entropy under these constraints is used for inference. In this paper, we propose a new three-parameter lifetime distribution using the maximum entropy principle under constraints on the mean and a general index. We then present some statistical properties of the new distribution, including the hazard rate function, quantile function, moments, characterization, and stochastic ordering. We use the maximum likelihood estimation technique to estimate the model parameters. A Monte Carlo study is carried out to evaluate the performance of the estimation method. To illustrate the usefulness of the proposed model, we fit it to three real data sets and compare its relative performance with that of the beta generalized Weibull family.
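As a hedged illustration of the maximum entropy principle invoked above (the specific index constraint used in the paper is not reproduced in this summary), maximizing differential entropy over densities on $[0,\infty)$ subject to a normalization, a mean constraint, and one further constraint $\mathbb{E}[g(X)]=c$ for a general index $g$ reads

```latex
\max_{f}\; -\int_0^\infty f(x)\log f(x)\,\mathrm{d}x
\quad\text{subject to}\quad
\int_0^\infty f(x)\,\mathrm{d}x = 1,\qquad
\int_0^\infty x\,f(x)\,\mathrm{d}x = \mu,\qquad
\int_0^\infty g(x)\,f(x)\,\mathrm{d}x = c,
```

whose Lagrangian solution takes the exponential-family form $f(x) \propto \exp\{-\lambda_1 x - \lambda_2 g(x)\}$; with the mean constraint alone it reduces to the exponential density $f(x) = \mu^{-1}e^{-x/\mu}$.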
For a random binary noncoalescing feedback shift register of width $n$, with all $2^{2^{n-1}}$ possible feedback functions $f$ equally likely, the process of long cycle lengths, scaled by dividing by $N=2^n$, converges in distribution to the same Poisson–Dirichlet limit as holds for random permutations in $\mathcal{S}_N$, with all $N!$ possible permutations equally likely. Such behaviour was conjectured by Golomb, Welch and Goldstein in 1959.
Sonar systems are frequently used to classify objects at a distance by using the structure of the echoes of acoustic waves as a proxy for the object’s shape and composition. Traditional synthetic aperture processing is highly effective in solving classification problems when the conditions are favourable but relies on accurate knowledge of the sensor’s trajectory relative to the object being measured. This article provides several new theoretical tools that decouple object classification performance from trajectory estimation in synthetic aperture sonar processing. The key insight is that decoupling the trajectory from classification-relevant information involves factoring a function into the composition of two functions. The article presents several new general topological invariants for smooth functions based on their factorisations over function composition. These invariants specialise to the case when a sonar platform trajectory is deformed by a non-small perturbation. The mathematical results exhibited in this article apply well beyond sonar classification problems. This article is written in a way that supports full mathematical generality.
We consider the problem of group testing (pooled testing), first introduced by Dorfman. For nonadaptive testing strategies, we refer to a nondefective item as “intruding” if it only appears in positive tests. Such items cause misclassification errors in the well-known COMP algorithm and can make other algorithms produce an error. It is therefore of interest to understand the distribution of the number of intruding items. We show that, under Bernoulli matrix designs, this distribution is well approximated in a variety of senses by a negative binomial distribution, allowing us to understand the performance of the two-stage conservative group testing algorithm of Aldridge.
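A minimal simulation sketch of the setting described above, under a Bernoulli matrix design: it runs COMP, which declares non-defective every item appearing in some negative test, and counts the intruding items. This is not the paper's analysis; the design parameters and the choice $p = 1/k$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, T = 500, 10, 200        # items, defectives, tests (illustrative sizes)
p = 1.0 / k                   # a common Bernoulli design choice (assumption)

X = rng.random((T, n)) < p            # X[t, i]: item i included in test t
defective = np.zeros(n, dtype=bool)
defective[rng.choice(n, k, replace=False)] = True
y = (X & defective).any(axis=1)       # a test is positive iff it hits a defective

# COMP declares non-defective every item that appears in some negative test.
appears_in_negative = X[~y].any(axis=0)
comp_estimate = ~appears_in_negative

# "Intruding" items: non-defectives that only appear in positive tests.
intruding = comp_estimate & ~defective
num_intruding = int(intruding.sum())
```

By construction every true defective survives COMP, so the misclassification errors mentioned in the abstract are exactly the intruding items.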
We present a Markov chain on the $n$-dimensional hypercube $\{0,1\}^n$ which satisfies $t_{{\rm mix}}^{(n)}(\varepsilon) = n[1 + o(1)]$. This Markov chain alternates between random and deterministic moves, and we prove that the chain has a cutoff with a window of size at most $O(n^{0.5+\delta})$, where $\delta>0$. The deterministic moves correspond to a linear shift register.
Recently there has been growing interest in studying the variability of uncertainty measures in information theory, and varentropy has been introduced and examined for one-sided truncated random variables. Since the interval entropy measure is instrumental in summarizing various properties of a system and its components when failure occurs between two time points, exploring the variability of this measure enriches the extracted information. In this article, we introduce the concept of varentropy for doubly truncated random variables. A detailed study of theoretical results, taking into account transformations, monotonicity and other conditions, is proposed. A simulation study has been carried out to investigate the behavior of varentropy on shrinking intervals for simulated and real-life data sets. Furthermore, applications related to the choice of the most acceptable system and to the first-passage times of an Ornstein–Uhlenbeck jump-diffusion process are illustrated.
Suppose that we have a method which estimates the conditional probabilities of some unknown stochastic source and we use it to guess which of the outcomes will happen. We want to make a correct guess as often as possible. Which estimators are good for this? In this work, we consider estimators given by a familiar notion of universal coding for stationary ergodic measures, while working in the framework of algorithmic randomness; that is, we are particularly interested in the prediction of Martin-Löf random points. We outline the general theory and exhibit some counterexamples. Completing a result of Ryabko from 2009, we also show that a universal probability measure in the sense of universal coding induces a universal predictor in the prequential sense. Surprisingly, this implication holds provided the universal measure does not assign too low conditional probabilities to individual symbols. As an example, we show that the Prediction by Partial Matching (PPM) measure satisfies this requirement with a large reserve.
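A hedged, much-simplified sketch of the prediction setup (not the paper's PPM construction): estimate conditional probabilities from counts with Krichevsky–Trofimov-style add-1/2 smoothing over fixed-order contexts, then guess the symbol with the highest estimated probability. The function name and the fixed order are illustrative assumptions.

```python
from collections import defaultdict

def kt_predictor(history, alphabet=(0, 1), order=1):
    """Estimate conditional probabilities with a Krichevsky-Trofimov
    (add-1/2) estimator over fixed-order contexts, then guess the
    symbol with the largest estimated probability."""
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(order, len(history)):
        ctx = tuple(history[t - order:t])
        counts[ctx][history[t]] += 1
    ctx = tuple(history[-order:]) if len(history) >= order else ()
    c = counts.get(ctx, {})
    total = sum(c.values())
    probs = {a: (c.get(a, 0) + 0.5) / (total + 0.5 * len(alphabet))
             for a in alphabet}
    return max(alphabet, key=probs.get), probs
```

On the alternating history `[0, 1, 0, 1, 0, 1, 0]` with order 1, the estimator assigns probability 7/8 to the next symbol being 1 and the predictor guesses accordingly.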
In this paper, the problem of restoring cloud-contaminated optical images is studied in the case where no information about the brightness of such images is available in the damaged region. We propose a new variational approach for exact restoration of optical multi-band images utilising Synthetic Aperture Radar (SAR) images of the same regions. We prove the existence of solutions, propose an alternating minimisation method for computing them, prove convergence of this method to weak solutions of the original problem, and derive optimality conditions.
This paper extends the fundamental concepts of entropy, information and divergence to the case where the distribution function and the respective survival function play the central role in their definition. The main aim is to provide an overview of these three categories of information measures and their cumulative and survival counterparts. It also aims to introduce and discuss Csiszár-type cumulative and survival divergences and the analogous Fisher-type information on the basis of cumulative and survival functions.
Two-sided bounds are explored for concentration functions and Rényi entropies in the class of discrete log-concave probability distributions. They are used to derive certain variants of the entropy power inequalities.
Several authors have investigated the question of whether canonical logic-based accounts of belief revision, and especially the theory of AGM revision operators, are compatible with the dynamics of Bayesian conditioning. Here we show that Leitgeb’s stability rule for acceptance, which has been offered as a possible solution to the Lottery paradox, makes it possible to bridge AGM revision and Bayesian update: using the stability rule, we prove that AGM revision operators emerge from Bayesian conditioning by an application of the principle of maximum entropy. In situations of information loss, or whenever the agent relies on a qualitative description of her information state—such as a plausibility ranking over hypotheses, or a belief set—the dynamics of AGM belief revision are compatible with Bayesian conditioning; indeed, through the maximum entropy principle, conditioning naturally generates AGM revision operators. This mitigates an impossibility theorem of Lin and Kelly for tracking Bayesian conditioning with AGM revision, and suggests an approach to the compatibility problem that highlights the information loss incurred by acceptance rules in passing from probabilistic to qualitative representations of belief.
We prove that if f and g are holomorphic functions on an open connected domain, with the same moduli on two intersecting segments, then $f=g$ up to multiplication by a unimodular constant, provided the segments make an angle that is an irrational multiple of $\pi$. We also prove that if f and g are functions in the Nevanlinna class, and if $|f|=|g|$ on the unit circle and on a circle inside the unit disc, then $f=g$ up to multiplication by a unimodular constant.
Furstenberg [Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation. Math. Syst. Theory 1 (1967), 1–49] calculated the Hausdorff and Minkowski dimensions of one-sided subshifts in terms of topological entropy. We generalize this to $\mathbb{Z}^{2}$-subshifts. Our generalization involves mean dimension theory. We calculate the metric mean dimension and the mean Hausdorff dimension of $\mathbb{Z}^{2}$-subshifts with respect to a subaction of $\mathbb{Z}$. The resulting formula is quite analogous to Furstenberg’s theorem. We also calculate the rate distortion dimension of $\mathbb{Z}^{2}$-subshifts in terms of Kolmogorov–Sinai entropy.
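For context, Furstenberg's one-sided result being generalized is commonly stated as follows: for a closed shift-invariant subset $X \subseteq \{1,\dots,N\}^{\mathbb{N}}$,

```latex
\dim_H X \;=\; \dim_M X \;=\; \frac{h_{\mathrm{top}}(X)}{\log N},
```

where $\dim_H$ and $\dim_M$ denote Hausdorff and Minkowski dimension and $h_{\mathrm{top}}$ the topological entropy; the paper's formula replaces the left-hand quantities with mean-dimension analogues for $\mathbb{Z}^{2}$-subshifts.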
We prove an essentially sharp $\tilde \Omega (n/k)$ lower bound on the k-round distributional complexity of the k-step pointer chasing problem under the uniform distribution, when Bob speaks first. This is an improvement over Nisan and Wigderson’s $\tilde \Omega (n/{k^2})$ lower bound, and essentially matches the randomized lower bound proved by Klauck. The proof is information-theoretic, and a key part of it is using asymmetric triangular discrimination instead of total variation distance; this idea may be useful elsewhere.
The linear complexity and the error linear complexity are two important security measures for stream ciphers. We construct periodic sequences from function fields and show that the error linear complexity of these periodic sequences is large. We also give a lower bound for the error linear complexity of a class of nonperiodic sequences.
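The linear complexity mentioned above is, concretely, the length of the shortest LFSR generating a binary sequence, computable by the classical Berlekamp–Massey algorithm. A minimal GF(2) sketch (the function name is our own; the paper's sequences and error variant are not reproduced here):

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR that
    generates the given binary sequence (its linear complexity)."""
    n = len(bits)
    c, b = [0] * (n + 1), [0] * (n + 1)   # current / previous connection polys
    c[0] = b[0] = 1
    L, m = 0, -1                          # current LFSR length, last update index
    for i in range(n):
        # discrepancy between the LFSR's prediction and the actual bit
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m + 1):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

For instance, the alternating sequence 1, 0, 1, 0, 1, 0 is produced by a length-2 register, while 0, 0, 1 forces a register as long as the sequence itself.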
We prove that an $L^{\infty }$ potential in the Schrödinger equation in three and higher dimensions can be uniquely determined from a finite number of boundary measurements, provided it belongs to a known finite dimensional subspace ${\mathcal{W}}$. As a corollary, we obtain a similar result for Calderón’s inverse conductivity problem. Lipschitz stability estimates and a globally convergent nonlinear reconstruction algorithm for both inverse problems are also presented. These are the first results on global uniqueness, stability and reconstruction for nonlinear inverse boundary value problems with finitely many measurements. We also discuss a few relevant examples of finite dimensional subspaces ${\mathcal{W}}$, including bandlimited and piecewise constant potentials, and explicitly compute the number of required measurements as a function of $\dim {\mathcal{W}}$.
This paper provides a functional analogue of the recently initiated dual Orlicz–Brunn–Minkowski theory for star bodies. We first propose the Orlicz addition of measures, and establish the dual functional Orlicz–Brunn–Minkowski inequality. Based on a family of linear Orlicz additions of two measures, we provide an interpretation for the famous $f$-divergence. Jensen’s inequality for integrals is also proved to be equivalent to the newly established dual functional Orlicz–Brunn–Minkowski inequality. An optimization problem for the $f$-divergence is proposed, and related functional affine isoperimetric inequalities are established.
In this paper, we introduce two notions of a relative operator (α, β)-entropy and a Tsallis relative operator (α, β)-entropy as two parameter extensions of the relative operator entropy and the Tsallis relative operator entropy. We apply a perspective approach to prove the joint convexity or concavity of these new notions, under certain conditions concerning α and β. Indeed, we give the parametric extensions, but in such a manner that they remain jointly convex or jointly concave.
Significance Statement. What is novel here is that we demonstrate how our techniques can be used to give simple proofs of old and new theorems for the functions that are relevant to quantum statistics. Our proof strategy shows that the joint convexity of the perspective of certain functions plays a crucial role in giving simple proofs of the joint convexity (resp. concavity) of some relative operator entropies.
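For context (these are the standard one-parameter objects the abstract says are being extended; the paper's two-parameter $(\alpha,\beta)$ versions are not reproduced in this summary), the relative operator entropy and the Tsallis relative operator entropy of positive invertible operators $A$ and $B$ are commonly defined as

```latex
S(A \mid B) = A^{1/2}\,\log\!\big(A^{-1/2} B A^{-1/2}\big)\,A^{1/2},
\qquad
T_\alpha(A \mid B) = \frac{A^{1/2}\big(A^{-1/2} B A^{-1/2}\big)^{\alpha} A^{1/2} - A}{\alpha},
\quad \alpha \in (0,1],
```

with $T_\alpha(A \mid B) \to S(A \mid B)$ as $\alpha \to 0$.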
We describe how to approximate fractal transformations generated by a one-parameter family of dynamical systems $W:[0,1]\rightarrow [0,1]$ constructed from a pair of monotone increasing diffeomorphisms $W_{i}$ such that $W_{i}^{-1}:[0,1]\rightarrow [0,1]$ for $i=0,1$. An algorithm is provided for determining the unique parameter value such that the closure of the symbolic attractor $\overline{\Omega}$ is symmetrical. Several examples are given, one in which the $W_{i}$ are affine and two in which the $W_{i}$ are nonlinear. Applications to digital imaging are also discussed.