We prove that if f and g are holomorphic functions on an open connected domain, with the same moduli on two intersecting segments, then $f=g$ up to multiplication by a unimodular constant, provided the segments make an angle that is an irrational multiple of $\pi$. We also prove that if f and g are functions in the Nevanlinna class, and if $|f|=|g|$ on the unit circle and on a circle inside the unit disc, then $f=g$ up to multiplication by a unimodular constant.
Furstenberg [Disjointness in ergodic theory, minimal sets, and a problem in Diophantine approximation. Math. Syst. Theory 1 (1967), 1–49] calculated the Hausdorff and Minkowski dimensions of one-sided subshifts in terms of topological entropy. We generalize this to $\mathbb{Z}^{2}$-subshifts. Our generalization involves mean dimension theory. We calculate the metric mean dimension and the mean Hausdorff dimension of $\mathbb{Z}^{2}$-subshifts with respect to a subaction of $\mathbb{Z}$. The resulting formula is quite analogous to Furstenberg’s theorem. We also calculate the rate distortion dimension of $\mathbb{Z}^{2}$-subshifts in terms of Kolmogorov–Sinai entropy.
We prove an essentially sharp $\tilde{\Omega}(n/k)$ lower bound on the k-round distributional complexity of the k-step pointer chasing problem under the uniform distribution, when Bob speaks first. This is an improvement over Nisan and Wigderson’s $\tilde{\Omega}(n/k^{2})$ lower bound, and essentially matches the randomized lower bound proved by Klauck. The proof is information-theoretic, and a key part of it is the use of asymmetric triangular discrimination instead of total variation distance; this idea may be useful elsewhere.
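For readers unfamiliar with the problem, the k-step pointer chasing function is easy to state concretely. The toy instance below (names and parameters are ours, not taken from the paper) alternates between Alice's pointer array and Bob's, following k pointer steps from a fixed start vertex:

```python
import random

def pointer_chase(fa, fb, k, start=0):
    """Follow k pointer steps, alternating Alice's map fa and Bob's map fb.

    Alice holds fa: [n] -> [n], Bob holds fb: [n] -> [n].  With k rounds of
    communication and Bob speaking first, computing the result is hard, which
    is the setting of the lower bound above.
    """
    v = start
    for step in range(k):
        v = fa[v] if step % 2 == 0 else fb[v]
    return v

n, k = 8, 4
random.seed(1)
fa = [random.randrange(n) for _ in range(n)]
fb = [random.randrange(n) for _ in range(n)]
print(pointer_chase(fa, fb, k))
```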
The linear complexity and the error linear complexity are two important security measures for stream ciphers. We construct periodic sequences from function fields and show that the error linear complexity of these periodic sequences is large. We also give a lower bound for the error linear complexity of a class of nonperiodic sequences.
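For concreteness: the linear complexity of a binary sequence is the length of the shortest LFSR that generates it, computable by the Berlekamp–Massey algorithm, and the k-error linear complexity is the minimum linear complexity over all sequences within Hamming distance k. A brute-force toy sketch (our illustration, not the paper's function-field construction):

```python
from itertools import combinations

def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c, b = [0] * n, [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                 # discrepancy between s[i] and LFSR prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

def k_error_lc(s, k):
    """k-error linear complexity by exhaustive search (tiny sequences only)."""
    best = linear_complexity(s)
    for r in range(1, k + 1):
        for pos in combinations(range(len(s)), r):
            t = list(s)
            for p in pos:
                t[p] ^= 1
            best = min(best, linear_complexity(t))
    return best
```

A sequence can have large linear complexity yet tiny k-error linear complexity (e.g. 0,0,1 has linear complexity 3, but one bit flip makes it all-zero), which is why the error linear complexity is the stronger security measure.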
This paper provides a functional analogue of the recently initiated dual Orlicz–Brunn–Minkowski theory for star bodies. We first propose the Orlicz addition of measures, and establish the dual functional Orlicz–Brunn–Minkowski inequality. Based on a family of linear Orlicz additions of two measures, we provide an interpretation for the famous $f$-divergence. Jensen’s inequality for integrals is also proved to be equivalent to the newly established dual functional Orlicz–Brunn–Minkowski inequality. An optimization problem for the $f$-divergence is proposed, and related functional affine isoperimetric inequalities are established.
In this paper, we introduce two notions of a relative operator (α, β)-entropy and a Tsallis relative operator (α, β)-entropy as two parameter extensions of the relative operator entropy and the Tsallis relative operator entropy. We apply a perspective approach to prove the joint convexity or concavity of these new notions, under certain conditions concerning α and β. Indeed, we give the parametric extensions, but in such a manner that they remain jointly convex or jointly concave.
Significance Statement. What is novel here is that we convincingly demonstrate how our techniques can be used to give simple proofs of old and new theorems for functions relevant to quantum statistics. Our proof strategy shows that the joint convexity of the perspective of certain functions plays a crucial role in giving simple proofs of the joint convexity (resp. concavity) of some relative operator entropies.
We describe how to approximate fractal transformations generated by a one-parameter family of dynamical systems $W:[0,1]\rightarrow [0,1]$ constructed from a pair of monotone increasing diffeomorphisms $W_{i}$ such that $W_{i}^{-1}:[0,1]\rightarrow [0,1]$ for $i=0,1$. An algorithm is provided for determining the unique parameter value such that the closure of the symbolic attractor $\overline{\Omega}$ is symmetrical. Several examples are given, one in which the $W_{i}$ are affine and two in which the $W_{i}$ are nonlinear. Applications to digital imaging are also discussed.
We prove that an $L^{\infty}$ potential in the Schrödinger equation in three and higher dimensions can be uniquely determined from a finite number of boundary measurements, provided it belongs to a known finite-dimensional subspace ${\mathcal{W}}$. As a corollary, we obtain a similar result for Calderón’s inverse conductivity problem. Lipschitz stability estimates and a globally convergent nonlinear reconstruction algorithm for both inverse problems are also presented. These are the first results on global uniqueness, stability and reconstruction for nonlinear inverse boundary value problems with finitely many measurements. We also discuss a few relevant examples of finite-dimensional subspaces ${\mathcal{W}}$, including bandlimited and piecewise constant potentials, and explicitly compute the number of required measurements as a function of $\dim {\mathcal{W}}$.
In modelling joint probability distributions it is often desirable to incorporate standard marginal distributions and match a set of key observed mixed moments. At the same time it may also be prudent to avoid additional unwarranted assumptions. The problem is to find the least ordered distribution that respects the prescribed constraints. In this paper we will construct a suitable joint probability distribution by finding the checkerboard copula of maximum entropy that allows us to incorporate the appropriate marginal distributions and match the nominated set of observed moments.
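The object involved can be made concrete in a few lines (our notation, not the paper's construction): a checkerboard copula has a piecewise-constant density on an n×n grid, and any doubly stochastic matrix induces one with uniform marginals. The sketch below computes its entropy and one mixed moment, the two quantities the maximum-entropy construction trades off:

```python
import numpy as np

def checkerboard_copula(P):
    """Cell probability masses of the checkerboard copula built from a doubly
    stochastic matrix P (rows and columns each sum to 1)."""
    n = P.shape[0]
    return P / n                    # mass of cell (i, j); density there is n*P[i, j]

def entropy(mass):
    """Differential entropy of the piecewise-constant copula density."""
    n = mass.shape[0]
    dens = mass * n * n             # density on each (1/n x 1/n) cell
    nz = mass > 0
    return float(-(mass[nz] * np.log(dens[nz])).sum())

def mixed_moment(mass):
    """E[UV]: exact for a piecewise-constant density (uniform on each cell)."""
    n = mass.shape[0]
    c = (np.arange(n) + 0.5) / n    # cell centroids
    return float((mass * np.outer(c, c)).sum())

n = 4
# A doubly stochastic matrix: mix the identity with a cyclic shift.
P = 0.5 * np.eye(n) + 0.5 * np.roll(np.eye(n), 1, axis=1)
m = checkerboard_copula(P)
print(entropy(m), mixed_moment(m))  # the independence copula has entropy 0
```

The maximum-entropy problem then searches over such matrices for the one maximizing the entropy subject to the nominated moment constraints.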
The Shannon entropy based on the probability density function is a key information measure with applications in different areas. Some alternative information measures have been proposed in the literature. Two relevant ones are the cumulative residual entropy (based on the survival function) and the cumulative past entropy (based on the distribution function). Recently, some extensions of these measures have been proposed. Here, we obtain some properties for the generalized cumulative past entropy. In particular, we prove that it determines the underlying distribution. We also study this measure in coherent systems and a closely related generalized past cumulative Kerridge inaccuracy measure.
In this paper, we perform a detailed spectral study of the liberation process associated with two symmetries of arbitrary ranks: $(R,S)\mapsto (R,U_{t}SU_{t}^{\ast })_{t\geqslant 0}$, where $(U_{t})_{t\geqslant 0}$ is a free unitary Brownian motion freely independent from $\{R,S\}$. Our main tool is free stochastic calculus, which allows us to derive a partial differential equation (PDE) for the Herglotz transform of the unitary process defined by $Y_{t}:=RU_{t}SU_{t}^{\ast }$. It turns out that this is exactly the PDE governing the flow of an analytic function transform of the spectral measure of the operator $X_{t}:=PU_{t}QU_{t}^{\ast }P$, where $P,Q$ are the orthogonal projections associated to $R,S$. Next, we relate the two spectral measures of $RU_{t}SU_{t}^{\ast }$ and of $PU_{t}QU_{t}^{\ast }P$ via their moment sequences and use this relationship to develop a theory of subordination for the boundary values of the Herglotz transform. In particular, we explicitly compute the subordination function and extend its inverse continuously to the unit circle. As an application, we prove the identity $i^{\ast }(\mathbb{C}P+\mathbb{C}(I-P);\mathbb{C}Q+\mathbb{C}(I-Q))=-\chi_{\text{orb}}(P,Q)$.
The proportional hazards (PH) model and its associated distributions provide suitable media for exploring connections between the Gini coefficient, Fisher information, and Shannon entropy. The connecting threads are Bayes risks of the mean excess of a random variable with the PH distribution and Bayes risks of the Fisher information of the equilibrium distribution of the PH model. Under various priors, these Bayes risks are generalized entropy functionals of the survival functions of the baseline and PH models and the expected asymptotic age of the renewal process with the PH renewal time distribution. Bounds for a Bayes risk of the mean excess and the Gini coefficient are given. The Shannon entropy integral of the equilibrium distribution of the PH model is represented in derivative forms. Several examples illustrate implementation of the results and provide insights for potential applications.
Recently, Rao et al. (2004) introduced an alternative measure of uncertainty known as the cumulative residual entropy (CRE). It is based on the survival (reliability) function F̅ instead of the probability density function f used in classical Shannon entropy. In reliability-based system design, the performance characteristics of coherent systems are of great importance. Accordingly, in this paper, we study the CRE for coherent and mixed systems when the component lifetimes are identically distributed. Bounds for the CRE of the system lifetime are obtained. We use these results to propose a measure of how close a system is to series and parallel systems of the same size. Our results suggest that the CRE can be viewed as an alternative entropy (dispersion) measure to classical Shannon entropy.
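Concretely, the CRE of a nonnegative random variable is $-\int_0^\infty \bar F(t)\log \bar F(t)\,dt$. A small empirical estimator (our illustration, not the paper's system-level results) recovers the known value 1/λ for the exponential distribution:

```python
import numpy as np

def cre(samples, grid=4000):
    """Empirical cumulative residual entropy -∫ S(t) log S(t) dt, with S the
    empirical survival function of a nonnegative sample."""
    x = np.sort(np.asarray(samples, dtype=float))
    t = np.linspace(0.0, x[-1], grid)
    s = 1.0 - np.searchsorted(x, t, side="right") / x.size   # survival S(t)
    integrand = np.where(s > 0, -s * np.log(np.where(s > 0, s, 1.0)), 0.0)
    # Trapezoidal rule over the grid.
    return float(((integrand[1:] + integrand[:-1]) / 2 * np.diff(t)).sum())

# Deterministic Exp(1) quantile sample; the exact CRE of Exp(rate λ) is 1/λ.
n = 5000
sample = -np.log(1.0 - (np.arange(1, n + 1) - 0.5) / n)
print(cre(sample))   # close to 1
```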
Image inpainting methods recover true images from partial noisy observations. Natural images usually consist of two layers: cartoons and textures. Simultaneous cartoon and texture inpainting methods are popular in the literature; they combine two tight frames, one (often built from wavelets, curvelets or shearlets) providing sparse representations for cartoons and the other (often built from discrete cosine transforms) offering sparse approximation for textures. Inspired by the recent development of directional tensor product complex tight framelets ($\text{TP}\text{-}\mathbb{C}\text{TF}$s) and their impressive performance for the image denoising problem, we propose an iterative thresholding algorithm using tight frames derived from $\text{TP}\text{-}\mathbb{C}\text{TF}$s for the image inpainting problem. The tight frame $\text{TP}\text{-}\mathbb{C}\text{TF}_{6}$ contains two classes of framelets, one good for cartoons and the other good for textures, so it can handle both layers well. For the image inpainting problem with additive zero-mean independent and identically distributed Gaussian noise, our proposed algorithm does not require manual parameter tuning for reasonably good performance. Experimental results show that our proposed algorithm performs better than several well-known frame systems for the image inpainting problem.
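In miniature, an iterative thresholding inpainting scheme looks as follows. We substitute a plain 2-D FFT for the $\text{TP}\text{-}\mathbb{C}\text{TF}_{6}$ tight frame (a real implementation would use the directional framelet transform), and the geometrically decreasing threshold schedule is our own choice, not the paper's:

```python
import numpy as np

def soft(c, lam):
    """Soft-thresholding operator."""
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def inpaint(y, mask, lam0=50.0, iters=100):
    """Iterative thresholding inpainting: alternately enforce the observed
    pixels and shrink transform coefficients.  The FFT here is a stand-in
    for the TP-CTF tight frame used in the paper."""
    x = np.where(mask, y, y[mask].mean())       # fill holes with the mean
    lam = lam0
    for _ in range(iters):
        x = np.where(mask, y, x)                # enforce data on observed pixels
        c = np.fft.fft2(x)
        c = soft(c.real, lam) + 1j * soft(c.imag, lam)
        x = np.real(np.fft.ifft2(c))
        lam *= 0.95                             # decreasing threshold schedule
    return np.where(mask, y, x)
```

The observed pixels are restored exactly at the end, so only the missing pixels are synthesized from the sparse transform-domain model.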
In recent years, variational models involving high-order derivatives have been widely used in image processing, because they can reduce staircase effects during noise elimination. However, it is very challenging to construct efficient algorithms that minimize the original high-order functionals. In this paper, we propose a new linearized augmented Lagrangian method for Euler's elastica image denoising model. We detail the procedure for finding the saddle points of the augmented Lagrangian functional. Instead of solving the associated linear systems by FFT or linear iterative methods (e.g., the Gauss–Seidel method), we adopt a linearized strategy to obtain the iteration sequence and thereby reduce the computational cost. In addition, we give a simple complexity analysis for the proposed method. Experimental results, with comparisons to previous methods, demonstrate the efficiency of the proposed method and indicate that such a linearized augmented Lagrangian method is particularly suitable for large images.
We propose a new two-phase method for the reconstruction of blurred images corrupted by impulse noise. In the first phase, we use a noise detector to identify the pixels contaminated by noise; in the second phase, we reconstruct the noisy pixels by solving an equality-constrained total variation minimization problem that preserves the exact values of the noise-free pixels. For images corrupted by impulse noise only (i.e., not blurred), we apply a semismooth Newton method to a reduced problem; if the images are also blurred, we solve the equality-constrained reconstruction problem using a first-order primal–dual algorithm. The proposed model improves computational efficiency (in the denoising case) and has the advantage of being free of regularization parameters. Our numerical results suggest that the method is competitive, in terms of its restoration capabilities, with other two-phase methods.
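A toy version of the two-phase idea, with both phases simplified relative to the paper (the detector is a crude median test rather than an adaptive median filter, and the fill is a harmonic fill rather than the edge-preserving equality-constrained TV minimization):

```python
import numpy as np

def detect_impulses(img, thresh=60):
    """Phase 1 (toy): flag a pixel as noisy if it deviates strongly from the
    median of its 3x3 neighbourhood."""
    H, W = img.shape
    pad = np.pad(img, 1, mode="edge")
    nb = np.stack([pad[i:i + H, j:j + W] for i in range(3) for j in range(3)])
    med = np.median(nb.astype(float), axis=0)
    return np.abs(img - med) > thresh

def restore(img, noisy, iters=200):
    """Phase 2 (toy): replace noisy pixels by repeatedly averaging their
    neighbours, keeping noise-free pixels fixed (the equality constraint)."""
    x = img.astype(float)
    x[noisy] = np.median(img[~noisy]) if (~noisy).any() else 0.0
    for _ in range(iters):
        avg = 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                      + np.roll(x, 1, 1) + np.roll(x, -1, 1))
        x[noisy] = avg[noisy]                 # update only the flagged pixels
    return x
```

The key structural point survives the simplification: pixels judged noise-free are never modified, so all regularization acts only where the detector fired.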
This paper presents a framework for compressed sensing that bridges a gap between existing theory and the current use of compressed sensing in many real-world applications. In doing so, it also introduces a new sampling method that yields substantially improved recovery over existing techniques. In many applications of compressed sensing, including medical imaging, the standard principles of incoherence and sparsity are lacking. Whilst compressed sensing is often used successfully in such applications, it is done largely without mathematical explanation. The framework introduced in this paper provides such a justification. It does so by replacing these standard principles with three more general concepts: asymptotic sparsity, asymptotic incoherence and multilevel random subsampling. Moreover, not only does this work provide such a theoretical justification, it explains several key phenomena witnessed in practice. In particular, and unlike the standard theory, this work demonstrates the dependence of optimal sampling strategies on both the incoherence structure of the sampling operator and on the structure of the signal to be recovered. Another key consequence of this framework is the introduction of a new structured sampling method that exploits these phenomena to achieve significant improvements over current state-of-the-art techniques.
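The multilevel random subsampling idea can be sketched directly: partition the frequency range into bands and sample each band with its own density, densest at the low frequencies where most signal energy sits. The band edges and fractions below are illustrative choices, not values from the paper:

```python
import numpy as np

def multilevel_mask(n, levels, fractions, seed=0):
    """Multilevel random subsampling mask for n frequencies: split [0, n)
    into bands at the given edges and sample a prescribed fraction of each
    band uniformly at random, without replacement."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n, dtype=bool)
    edges = [0] + list(levels) + [n]
    for (a, b), frac in zip(zip(edges, edges[1:]), fractions):
        idx = rng.choice(np.arange(a, b), size=int(frac * (b - a)), replace=False)
        mask[idx] = True
    return mask

# Fully sample the lowest band, then decay: 100%, 50%, 10%.
mask = multilevel_mask(1024, levels=(64, 256), fractions=(1.0, 0.5, 0.1))
print(mask.sum(), "of", mask.size, "frequencies sampled")
```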
Consider a family of Boolean models, indexed by integers n≥1. The nth model features a Poisson point process in ℝ^n of intensity e^{nρ_n}, and balls of independent and identically distributed radii distributed like X̅_n√n. Assume that ρ_n→ρ as n→∞, and that X̅_n satisfies a large deviations principle. We show that there exist three deterministic thresholds: τ_d, the degree threshold; τ_p, the percolation probability threshold; and τ_v, the volume fraction threshold, such that, asymptotically as n tends to ∞, we have the following features. (i) For ρ<τ_d, almost every point is isolated, namely its ball intersects no other ball; (ii) for τ_d<ρ<τ_p, the mean number of balls intersected by a typical ball converges to ∞ and nevertheless there is no percolation; (iii) for τ_p<ρ<τ_v, the volume fraction is 0 and nevertheless percolation occurs; (iv) for τ_d<ρ<τ_v, the mean number of balls intersected by a typical ball converges to ∞ and nevertheless the volume fraction is 0; (v) for ρ>τ_v, the whole space is covered. The analysis of this asymptotic regime is motivated by problems in information theory, but it could be of independent interest in stochastic geometry. The relations between these three thresholds and the Shannon–Poltyrev threshold are discussed.
NTRU is a public-key cryptosystem introduced at ANTS-III. The two most used techniques in attacking the NTRU private key are meet-in-the-middle attacks and lattice-basis reduction attacks. Howgrave-Graham combined both techniques in 2007 and pointed out that the largest obstacle to attacks is the memory capacity that is required for the meet-in-the-middle phase. In the present paper an algorithm is presented that applies low-memory techniques to find ‘golden’ collisions to Odlyzko’s meet-in-the-middle attack against the NTRU private key. Several aspects of NTRU secret keys and the algorithm are analysed. The running time of the algorithm with a maximum storage capacity of $w$ is estimated and experimentally verified. Experiments indicate that decreasing the storage capacity $w$ by a factor $1<c<\sqrt{w}$ increases the running time by a factor $\sqrt{c}$.
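The low-memory collision-finding idea (distinguished points, in the style of van Oorschot and Wiener, on which golden-collision search builds) can be illustrated on a toy function. Everything below, including the hash-based step function and all parameters, is our own stand-in, not the paper's NTRU instantiation:

```python
import hashlib

def f(x, n=2**20):
    """Toy random-looking step function on [0, n)."""
    h = hashlib.sha256(x.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big") % n

def distinguished(x, bits=8):
    return x % (1 << bits) == 0         # ~1/256 of points are distinguished

def trail(start, max_len=1 << 12):
    """Walk until a distinguished point; store only (endpoint, length)."""
    x, steps = start, 0
    while not distinguished(x):
        x, steps = f(x), steps + 1
        if steps > max_len:             # trapped in a cycle: abandon this trail
            return None
    return x, steps

def locate(s1, l1, s2, l2):
    """Replay two trails ending at the same point to the colliding pair."""
    if l1 < l2:
        s1, l1, s2, l2 = s2, l2, s1, l1
    for _ in range(l1 - l2):            # align the longer trail
        s1 = f(s1)
    while f(s1) != f(s2):
        s1, s2 = f(s1), f(s2)
    return s1, s2

def find_collision():
    """Memory grows with the number of trails, not the number of points."""
    seen, start = {}, 0
    while True:
        t = trail(start)
        if t is not None:
            dp, steps = t
            if dp in seen:
                a, b = locate(start, steps, *seen[dp])
                if a != b:              # skip merged trails (no true collision)
                    return a, b
            seen[dp] = (start, steps)
        start += 1

x, y = find_collision()                 # x != y with f(x) == f(y)
```

Only one dictionary entry is stored per trail, which is the memory/time trade-off that the abstract's $w$ parameter quantifies.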