Let $\mathbb {F}$ be a field and $(s_0,\ldots ,s_{n-1})$ be a finite sequence of elements of $\mathbb {F}$. In an earlier paper [G. H. Norton, ‘On the annihilator ideal of an inverse form’, J. Appl. Algebra Engrg. Comm. Comput. 28 (2017), 31–78], we used the $\mathbb {F}[x,z]$ submodule $\mathbb {F}[x^{-1},z^{-1}]$ of Macaulay’s inverse system $\mathbb {F}[[x^{-1},z^{-1}]]$ (where z is our homogenising variable) to construct generating forms for the (homogeneous) annihilator ideal of $(s_0,\ldots ,s_{n-1})$. We also gave an $\mathcal {O}(n^2)$ algorithm to compute a special pair of generating forms of such an annihilator ideal. Here we apply this approach to the sequence r of the title. We obtain special forms generating the annihilator ideal for $(r_0,\ldots ,r_{n-1})$ without polynomial multiplication or division, so that the algorithm becomes linear. In particular, we obtain its linear complexities. We also give additional applications of this approach.
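The algorithmic point of comparison most readers will know is the classical Berlekamp–Massey computation of linear complexity; the sketch below implements that classical routine over a prime field GF(p), purely as a baseline and not the inverse-form construction of the paper (the field size and the example sequence are illustrative assumptions).

def linear_complexity(s, p):
    """Linear complexity of the finite sequence s over GF(p) via Berlekamp-Massey."""
    n = len(s)
    C, B = [1] + [0] * n, [1] + [0] * n   # current / previous connection polynomials
    L, m, b = 0, 1, 1                     # complexity, steps since last update, last discrepancy
    for i in range(n):
        # discrepancy d = s_i + sum_{j=1}^{L} C_j * s_{i-j}  (mod p)
        d = (s[i] + sum(C[j] * s[i - j] for j in range(1, L + 1))) % p
        if d == 0:
            m += 1
            continue
        T = C[:]
        coef = d * pow(b, p - 2, p) % p   # d / b in GF(p)
        for j in range(n - m + 1):
            C[j + m] = (C[j + m] - coef * B[j]) % p
        if 2 * L <= i:                    # length change
            L, B, b, m = i + 1 - L, T, d, 1
        else:
            m += 1
    return L

# Example over GF(2) (illustrative sequence)
print(linear_complexity([0, 0, 1, 1, 0, 1, 1, 1], 2))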
This paper focuses on the fundamental aspects of super-resolution, particularly addressing the stability of super-resolution and the estimation of two-point resolution. Our first major contribution is the introduction of two location-amplitude identities that characterize the relationships between locations and amplitudes of true and recovered sources in the one-dimensional super-resolution problem. These identities facilitate direct derivations of the super-resolution capabilities for recovering the number, location, and amplitude of sources, significantly advancing existing estimations to levels of practical relevance. As a natural extension, we establish the stability of a specific $l_{0}$ minimization algorithm in the super-resolution problem.
The second crucial contribution of this paper is the theoretical proof of a two-point resolution limit in multi-dimensional spaces. The resolution limit is expressed as
$$\begin{align*}\mathscr R = \frac{4\arcsin \left(\left(\frac{\sigma}{m_{\min}}\right)^{\frac{1}{2}} \right)}{\Omega} \end{align*}$$
for $\frac{\sigma}{m_{\min}} \leqslant \frac{1}{2}$, where $\frac{\sigma}{m_{\min}}$ represents the inverse of the signal-to-noise ratio ($\mathrm{SNR}$) and $\Omega$ is the cutoff frequency. This also demonstrates that, for resolving two point sources, the achievable resolution can surpass the Rayleigh limit $\frac{\pi}{\Omega}$ once the SNR exceeds $2$. Moreover, we exhibit a tractable algorithm that achieves the resolution $\mathscr{R}$ when distinguishing two sources.
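As a quick numerical sanity check of this formula (an illustrative sketch, independent of the paper's proofs; the cutoff frequency below is an arbitrary choice), note that at SNR $=2$ the expression reduces exactly to the Rayleigh limit and falls below it as the SNR grows:

# Illustrative check of the two-point resolution formula
#   R(sigma/m_min) = 4*arcsin(sqrt(sigma/m_min)) / Omega,  valid for sigma/m_min <= 1/2.
import numpy as np

Omega = 1.0                      # cutoff frequency (arbitrary units, assumption)
rayleigh = np.pi / Omega         # classical Rayleigh limit pi/Omega

def resolution(inv_snr, omega=Omega):
    return 4.0 * np.arcsin(np.sqrt(inv_snr)) / omega

# At SNR = 2 (sigma/m_min = 1/2): arcsin(sqrt(1/2)) = pi/4, so R = pi/Omega.
print(np.isclose(resolution(0.5), rayleigh))        # True

# For larger SNR (smaller sigma/m_min) the limit falls below the Rayleigh limit.
for snr in [2, 4, 16, 100]:
    print(snr, resolution(1.0 / snr) / rayleigh)    # ratio < 1 once SNR > 2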
Let n be a positive integer and $\underline {n}=\{1,2,\ldots ,n\}$. A conjecture arising from certain polynomial near-ring codes states that if $k\geq 1$ and $a_{1},a_{2},\ldots ,a_{k}$ are distinct positive integers, then the symmetric difference $a_{1}\underline {n}\mathbin {\Delta }a_{2}\underline {n}\mathbin {\Delta }\cdots \mathbin {\Delta }a_{k}\underline {n}$ contains at least n elements. Here, $a_{i}\underline {n}=\{a_{i},2a_{i},\ldots ,na_{i}\}$ for each i. We prove this conjecture for arbitrary n and for $k=1,2,3$.
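The conjecture is easy to probe by brute force; the following sketch (illustrative parameter ranges only) confirms $|a_1\underline{n}\,\Delta\,\cdots\,\Delta\,a_k\underline{n}|\geq n$ for all the small cases it examines:

# Brute-force check of the symmetric-difference conjecture for small parameters.
from itertools import combinations

def sym_diff_size(n, coeffs):
    """|a_1*[n] XOR a_2*[n] XOR ... XOR a_k*[n]| for the given distinct coefficients."""
    acc = set()
    for a in coeffs:
        acc ^= {a * j for j in range(1, n + 1)}
    return len(acc)

# Check every choice of k distinct coefficients up to some bound (assumption: bound = 12).
BOUND = 12
for n in range(1, 9):
    for k in range(1, 4):
        assert all(sym_diff_size(n, c) >= n
                   for c in combinations(range(1, BOUND + 1), k)), (n, k)
print("conjecture holds on all tested cases")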
Questions of ‘how best to acquire data’ are essential to modelling and prediction in the natural and social sciences, engineering applications, and beyond. Optimal experimental design (OED) formalizes these questions and creates computational methods to answer them. This article presents a systematic survey of modern OED, from its foundations in classical design theory to current research involving OED for complex models. We begin by reviewing criteria used to formulate an OED problem and thus to encode the goal of performing an experiment. We emphasize the flexibility of the Bayesian and decision-theoretic approach, which encompasses information-based criteria that are well-suited to nonlinear and non-Gaussian statistical models. We then discuss methods for estimating or bounding the values of these design criteria; this endeavour can be quite challenging due to strong nonlinearities, high parameter dimension, large per-sample costs, or settings where the model is implicit. A complementary set of computational issues involves optimization methods used to find a design; we discuss such methods in the discrete (combinatorial) setting of observation selection and in settings where an exact design can be continuously parametrized. Finally we present emerging methods for sequential OED that build non-myopic design policies, rather than explicit designs; these methods naturally adapt to the outcomes of past experiments in proposing new experiments, while seeking coordination among all experiments to be performed. Throughout, we highlight important open questions and challenges.
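To make one information-based criterion concrete, here is a minimal sketch of the expected information gain for a linear-Gaussian model $y=G\theta+\epsilon$ with prior $\theta\sim N(0,\Sigma_{\mathrm{pr}})$ and noise $\epsilon\sim N(0,\sigma^2 I)$, where it coincides with Bayesian D-optimality; the two candidate designs are invented for illustration.

# Expected information gain (EIG) for a linear-Gaussian model y = G @ theta + noise:
#   EIG(G) = 0.5 * logdet(I + sigma^-2 * G @ Sigma_pr @ G.T),
# which equals the Bayesian D-optimality criterion in this setting.
import numpy as np

def eig_linear_gaussian(G, Sigma_pr, sigma):
    m = G.shape[0]
    M = np.eye(m) + (G @ Sigma_pr @ G.T) / sigma**2
    return 0.5 * np.linalg.slogdet(M)[1]

# Toy comparison of two candidate designs (rows of G = observation functionals; illustrative).
Sigma_pr = np.eye(3)
G_clustered = np.array([[1.0, 0.1, 0.0],
                        [1.0, 0.2, 0.0]])          # two nearly redundant observations
G_spread    = np.array([[1.0, 0.1, 0.0],
                        [0.0, 1.0, 0.5]])          # observations probing different directions
for name, G in [("clustered", G_clustered), ("spread", G_spread)]:
    print(name, eig_linear_gaussian(G, Sigma_pr, sigma=0.5))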
Information generating functions (IGFs) have been of great interest to researchers due to their ability to generate various information measures. The IGF of an absolutely continuous random variable (see Golomb, S. (1966). The information generating function of a probability distribution. IEEE Transactions on Information Theory, 12(1), 75–77) depends on its density function. However, there are several models with intractable cumulative distribution functions that nevertheless have explicit quantile functions. For this reason, in this work, we propose a quantile version of the IGF and then explore some of its properties. The effect of increasing transformations on it is then studied, and bounds are also obtained. The proposed generating function is studied especially for escort and generalized escort distributions. Some connections between the quantile-based IGF (Q-IGF) order and well-known stochastic orders are established. Finally, the proposed Q-IGF is extended to residual and past lifetimes as well. Several examples are presented throughout to illustrate the theoretical results established here. An inferential application of the proposed methodology is also discussed.
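For orientation, here is a sketch of the classical density-based IGF together with one natural quantile-form rewrite obtained by the substitution $x=Q(u)$ (the paper's Q-IGF may be defined differently in detail); the two forms agree for the exponential distribution used as an illustration.

# Golomb's information generating function I_alpha = ∫ f(x)^alpha dx, and a
# quantile-form rewrite obtained by substituting x = Q(u) (so dx = q(u) du,
# with quantile density q = Q' = 1/f(Q(u))):  I_alpha = ∫_0^1 q(u)^(1 - alpha) du.
# Checked here for the exponential law, where both equal lambda^(alpha-1)/alpha.
import numpy as np
from scipy.integrate import quad

lam, alpha = 2.0, 3.0                       # illustrative parameter choices

f = lambda x: lam * np.exp(-lam * x)        # density
q = lambda u: 1.0 / (lam * (1.0 - u))       # quantile density Q'(u)

density_form  = quad(lambda x: f(x)**alpha, 0, np.inf)[0]
quantile_form = quad(lambda u: q(u)**(1.0 - alpha), 0, 1)[0]
closed_form   = lam**(alpha - 1) / alpha

print(density_form, quantile_form, closed_form)   # all ≈ 1.333...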
Let f and g be analytic functions on the open unit disk ${\mathbb D}$ such that $|f|=|g|$ on a set A. We give an alternative proof of the result of Perez that there exists c in the unit circle ${\mathbb T}$ such that $f=cg$ when A is the union of two lines in ${\mathbb D}$ intersecting at an angle that is an irrational multiple of $\pi $, and from this we deduce a sequential generalization of the result. Similarly, the same conclusion is valid when f and g are in the Nevanlinna class and A is the union of the unit circle and an interior circle, tangential or not. We also provide sequential versions of this result and analyze the case $A=r{\mathbb T}$. Finally, we examine the most general situation in which there is equality on two distinct circles in the disk, providing either a result or a counterexample for each possible configuration.
We investigate the convergence rate of multi-marginal optimal transport costs that are regularized with the Boltzmann–Shannon entropy, as the noise parameter $\varepsilon $ tends to $0$. We establish lower and upper bounds on the difference with the unregularized cost of the form $C\varepsilon \log (1/\varepsilon )+O(\varepsilon )$ for some explicit dimensional constants C depending on the marginals and on the ground cost, but not on the optimal transport plans themselves. Upper bounds are obtained for Lipschitz costs, with a finer estimate for locally semiconcave costs, and lower bounds for $\mathscr {C}^2$ costs satisfying a signature condition on the mixed second derivatives that may include degenerate costs, thus generalizing previous results for the two-marginal case and for nondegenerate costs. In particular, we obtain matching bounds in some typical situations where the optimal plan is deterministic.
We generalize to a broader class of decoupled measures a result of Ziv and Merhav on universal estimation of the specific cross (or relative) entropy, originally for a pair of multilevel Markov measures. Our generalization focuses on abstract decoupling conditions and covers pairs of suitably regular g-measures and pairs of equilibrium measures arising from the “small space of interactions” in mathematical statistical mechanics.
The Hoffman ratio bound, Lovász theta function, and Schrijver theta function are classical upper bounds for the independence number of graphs, which are useful in graph theory, extremal combinatorics, and information theory. By using generalized inverses and eigenvalues of graph matrices, we give bounds for independent sets and the independence number of graphs. Our bounds unify the Lovász theta function, Schrijver theta function, and Hoffman-type bounds, and we obtain necessary and sufficient conditions for graphs attaining these bounds. Our work leads to some simple structural and spectral conditions for determining a maximum independent set, the independence number, the Shannon capacity, and the Lovász theta function of a graph.
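As a small illustration of the Hoffman-type ingredient (not an example taken from the paper): for a $k$-regular graph on $n$ vertices with smallest adjacency eigenvalue $\lambda_{\min}$, the ratio bound $\alpha(G)\leq n\,\frac{-\lambda_{\min}}{k-\lambda_{\min}}$ is attained by the Petersen graph.

# Hoffman ratio bound for a k-regular graph: alpha(G) <= n * (-lambda_min) / (k - lambda_min).
# Illustration on the Petersen graph (spectrum {3, 1^5, (-2)^4}), where the bound
# 10 * 2 / (3 + 2) = 4 equals the independence number.
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are 2-element subsets of
# {0,...,4}, with edges between disjoint pairs.
verts = list(combinations(range(5), 2))
n = len(verts)
A = np.zeros((n, n))
for i, u in enumerate(verts):
    for j, v in enumerate(verts):
        if not set(u) & set(v):
            A[i, j] = 1

eigs = np.linalg.eigvalsh(A)                 # ascending order
k, lam_min = eigs[-1], eigs[0]
hoffman = n * (-lam_min) / (k - lam_min)
print(round(hoffman, 6))   # 4.0; the four pairs containing a fixed element form an independent set of size 4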
In analogy to classical spherical t-design points, we introduce the concept of t-design curves on the sphere. This means that the line integral along a t-design curve integrates all polynomials of degree at most t exactly. For low degrees, we construct explicit examples. We also derive asymptotic lower bounds on the lengths of t-design curves. Our main results prove the existence of asymptotically optimal t-design curves in the Euclidean $2$-sphere and the existence of t-design curves in the d-sphere.
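For intuition about the definition (a numerical illustration, not an example from the paper): a great circle reproduces the spherical average of every polynomial of degree at most one, so it is a 1-design curve, but it already fails on the degree-two monomial $x^2$.

# The great circle z = 0, parameterized at unit speed, matches sphere averages for
# degree-1 polynomials but not for x^2 (circle average 1/2 vs. sphere average 1/3).
import numpy as np

t = np.linspace(0, 2 * np.pi, 10000, endpoint=False)   # uniform in arc length
x, y, z = np.cos(t), np.sin(t), np.zeros_like(t)

circle_avg = lambda vals: vals.mean()                  # uniform samples => mean = line average

print(circle_avg(x), circle_avg(y), circle_avg(z))     # all ≈ 0, matching the sphere averages
print(circle_avg(x**2))                                # ≈ 0.5, while the sphere average is 1/3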
It is proven that a conjecture of Tao (2010) holds true for log-concave random variables on the integers: For every $n \geq 1$, if $X_1,\ldots,X_n$ are i.i.d. integer-valued, log-concave random variables, then
as $H(X_1) \to \infty$, where $H(X_1)$ denotes the (discrete) Shannon entropy. The problem is reduced to the continuous setting by showing that if $U_1,\ldots,U_n$ are independent continuous uniforms on $(0,1)$, then
In this paper, we first give a necessary and sufficient condition for a factor code with an unambiguous symbol to admit a subshift of finite type restricted to which it is one-to-one and onto. We then give a necessary and sufficient condition for the standard factor code on a spoke graph to admit a subshift of finite type restricted to which it is finite-to-one and onto. We also conjecture that for such a code, the finite-to-one and onto property is equivalent to the existence of a stationary Markov chain that achieves the capacity of the corresponding deterministic channel.
Measures of uncertainty are a topic of considerable and growing interest. Recently, the introduction of extropy as a measure of uncertainty, dual to Shannon entropy, has opened up interest in new aspects of the subject. Since there are many versions of entropy, a unified formulation has been introduced to work with all of them in an easy way. Here we consider the possibility of defining a unified formulation for extropy by introducing a measure depending on two parameters. For particular choices of parameters, this measure provides the well-known formulations of extropy. Moreover, the unified formulation of extropy is also analyzed in the context of the Dempster–Shafer theory of evidence, and an application to classification problems is given.
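For reference, a sketch using the standard definitions (the exponential law and its rate are illustrative choices): extropy $J(X)=-\tfrac12\int f(x)^2\,dx$, the dual of the differential Shannon entropy $H(X)=-\int f\log f\,dx$.

# Extropy J(X) = -0.5 * ∫ f(x)^2 dx versus Shannon (differential) entropy
# H(X) = -∫ f(x) log f(x) dx, evaluated for an Exp(lambda) random variable.
import numpy as np
from scipy.integrate import quad

lam = 2.0                                    # illustrative rate parameter
f = lambda x: lam * np.exp(-lam * x)

extropy = -0.5 * quad(lambda x: f(x)**2, 0, np.inf)[0]
entropy = -quad(lambda x: f(x) * np.log(f(x)), 0, np.inf)[0]

print(extropy, -lam / 4)                     # closed form: J = -lambda/4
print(entropy, 1 - np.log(lam))              # closed form: H = 1 - log(lambda)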
We prove a generalization of Krieger’s embedding theorem, in the spirit of zero-error information theory. Specifically, given a mixing shift of finite type X, a mixing sofic shift Y, and a surjective sliding block code $\pi : X \to Y$, we give necessary and sufficient conditions for a subshift Z of topological entropy strictly lower than that of Y to admit an embedding $\psi : Z \to X$ such that $\pi \circ \psi $ is injective.
We show how convergence to the Gumbel distribution in an extreme value setting can be understood in an information-theoretic sense. We introduce a new type of score function which behaves well under the maximum operation, and which implies simple expressions for entropy and relative entropy. We show that, assuming certain properties of the von Mises representation, convergence to the Gumbel distribution can be proved in the strong sense of relative entropy.
The basic idea of voting protocols is that nodes query a sample of other nodes and adjust their own opinion throughout several rounds based on the proportion of the sampled opinions. In the classic model, it is assumed that all nodes have the same weight. We study voting protocols for heterogeneous weights with respect to fairness. A voting protocol is fair if the influence on the eventual outcome of a given participant is linear in its weight. Previous work used sampling with replacement to construct a fair voting scheme. However, greedy sampling, i.e., sampling with replacement until a given number of distinct elements is chosen, has been shown to be more robust and performant.
In this paper, we study fairness of voting protocols with greedy sampling and propose a voting scheme that is asymptotically fair for a broad class of weight distributions. We complement our theoretical findings with numerical results and present several open questions and conjectures.
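To fix ideas about the sampling primitive itself, here is a simulation sketch of weighted greedy sampling (the weights, sample size and trial count are arbitrary choices, and selection frequency is only a crude proxy for the influence studied in the paper):

# Weighted greedy sampling: draw with probability proportional to weight, with
# replacement, until k distinct nodes are collected. We estimate how often each
# node ends up in a sample.
import numpy as np

rng = np.random.default_rng(1)
weights = np.array([8.0, 4.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
p = weights / weights.sum()
k = 3                                    # number of distinct nodes per query (assumption)

def greedy_sample(rng, p, k):
    chosen = set()
    while len(chosen) < k:
        chosen.add(rng.choice(len(p), p=p))
    return chosen

trials = 20000
counts = np.zeros(len(p))
for _ in range(trials):
    for i in greedy_sample(rng, p, k):
        counts[i] += 1

# Compare per-node selection frequency with normalized weight: the two do not
# coincide for unequal weights, which is the distortion a fair scheme must correct.
print(np.round(counts / (trials * k), 3))
print(np.round(p, 3))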
An extension of Shannon’s entropy power inequality when one of the summands is Gaussian was provided by Costa in 1985, known as Costa’s concavity inequality. We consider the additive Gaussian noise channel with a more realistic assumption, i.e. the input and noise components are not independent and their dependence structure follows the well-known multivariate Gaussian copula. Two generalizations for the first- and second-order derivatives of the differential entropy of the output signal for dependent multivariate random variables are derived. It is shown that some previous results in the literature are particular versions of our results. Using these derivatives, concavity of the entropy power, under certain mild conditions, is proved. Finally, special one-dimensional versions of our general results are described which indeed reveal an extension of the one-dimensional case of Costa’s concavity inequality to the dependent case. An illustrative example is also presented.
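As a numerical backdrop (the classical independent one-dimensional case only, not the dependent Gaussian-copula setting of the paper): Costa's result says the entropy power $N(t)=e^{2h(X+\sqrt{t}Z)}/(2\pi e)$ is concave in $t$, which the sketch below checks for a Gaussian-mixture input by second differences.

# Numerical check of Costa's concavity (classical independent case, one dimension):
# for X a two-component Gaussian mixture and Z standard normal independent of X,
# the entropy power N(t) = exp(2*h(X + sqrt(t) Z)) / (2*pi*e) should be concave in t.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def mixture_entropy(t):
    # X ~ 0.5*N(-2, 0.5^2) + 0.5*N(2, 0.5^2); X + sqrt(t)*Z is the same mixture
    # with each component variance increased by t.
    s = np.sqrt(0.25 + t)
    pdf = lambda x: 0.5 * norm.pdf(x, -2, s) + 0.5 * norm.pdf(x, 2, s)
    integrand = lambda x: -pdf(x) * np.log(pdf(x)) if pdf(x) > 0 else 0.0
    return quad(integrand, -30, 30, limit=200)[0]

ts = np.linspace(0.1, 5.0, 15)
N = np.array([np.exp(2 * mixture_entropy(t)) / (2 * np.pi * np.e) for t in ts])
second_diff = N[2:] - 2 * N[1:-1] + N[:-2]
print(np.all(second_diff <= 1e-6))   # True: discrete second differences are nonpositive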
The principle of maximum entropy is a well-known approach to produce a model for data-generating distributions. In this approach, if partial knowledge about the distribution is available in terms of a set of information constraints, then the model that maximizes entropy under these constraints is used for the inference. In this paper, we propose a new three-parameter lifetime distribution using the maximum entropy principle under the constraints on the mean and a general index. We then present some statistical properties of the new distribution, including hazard rate function, quantile function, moments, characterization, and stochastic ordering. We use the maximum likelihood estimation technique to estimate the model parameters. A Monte Carlo study is carried out to evaluate the performance of the estimation method. In order to illustrate the usefulness of the proposed model, we fit the model to three real data sets and compare its relative performance with respect to the beta generalized Weibull family.
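A minimal illustration of the principle (restricted, for simplicity, to a mean constraint and the gamma family; the paper's construction uses a mean together with a more general index): among gamma densities with a fixed mean, entropy is maximized by the exponential member, which is the max-entropy law on $[0,\infty)$ under a mean constraint.

# Illustration of the maximum entropy principle under a mean constraint (not the
# paper's general-index constraint): among gamma densities with a fixed mean, the
# differential entropy is maximized at shape 1, i.e. the exponential distribution.
import numpy as np
from scipy.stats import gamma

mean = 2.0                                      # illustrative mean constraint
shapes = np.array([0.3, 0.5, 1.0, 2.0, 5.0])
entropies = [gamma(a=k, scale=mean / k).entropy() for k in shapes]

for k, h in zip(shapes, entropies):
    print(f"shape {k:>4}: entropy {h:.4f}")
print("maximized at shape =", shapes[int(np.argmax(entropies))])   # 1.0 (exponential)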
For a random binary noncoalescing feedback shift register of width $n$, with all $2^{2^{n-1}}$ possible feedback functions $f$ equally likely, the process of long cycle lengths, scaled by dividing by $N=2^n$, converges in distribution to the same Poisson–Dirichlet limit as holds for random permutations in $\mathcal{S}_N$, with all $N!$ possible permutations equally likely. Such behaviour was conjectured by Golomb, Welch and Goldstein in 1959.
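A small simulation makes the statement concrete (the width, trial count and comparison statistic below are ad hoc choices): a noncoalescing register has feedback of the form $f(x_1,\ldots,x_n)=x_1\oplus g(x_2,\ldots,x_n)$, so drawing $g$ uniformly gives a random bijection on the $N=2^n$ states, whose scaled longest cycle can be compared with that of a uniformly random permutation.

# Scaled longest-cycle length of a random noncoalescing binary FSR of width n
# (feedback f(x1,...,xn) = x1 XOR g(x2,...,xn), g uniform) versus a uniformly
# random permutation of N = 2^n elements. Both empirical means should be close,
# and near the Golomb-Dickman constant ~0.624 predicted by the Poisson-Dirichlet limit.
import random

def longest_cycle_fraction(perm):
    N, seen, best = len(perm), [False] * len(perm), 0
    for s in range(N):
        if not seen[s]:
            length, v = 0, s
            while not seen[v]:
                seen[v] = True
                v = perm[v]
                length += 1
            best = max(best, length)
    return best / N

def random_fsr_permutation(n, rng):
    g = [rng.randrange(2) for _ in range(1 << (n - 1))]      # random g on (x2,...,xn)
    perm = []
    for state in range(1 << n):
        x1, rest = state >> (n - 1), state & ((1 << (n - 1)) - 1)
        new_bit = x1 ^ g[rest]
        perm.append((rest << 1) | new_bit)                   # shift left, append feedback bit
    return perm

rng = random.Random(0)
n, trials = 10, 200
N = 1 << n
fsr = [longest_cycle_fraction(random_fsr_permutation(n, rng)) for _ in range(trials)]
rand_perm = [longest_cycle_fraction(rng.sample(range(N), N)) for _ in range(trials)]
print(sum(fsr) / trials, sum(rand_perm) / trials)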
Sonar systems are frequently used to classify objects at a distance by using the structure of the echoes of acoustic waves as a proxy for the object’s shape and composition. Traditional synthetic aperture processing is highly effective in solving classification problems when the conditions are favourable but relies on accurate knowledge of the sensor’s trajectory relative to the object being measured. This article provides several new theoretical tools that decouple object classification performance from trajectory estimation in synthetic aperture sonar processing. The key insight is that decoupling the trajectory from classification-relevant information involves factoring a function into the composition of two functions. The article presents several new general topological invariants for smooth functions based on their factorisations over function composition. These invariants specialise to the case when a sonar platform trajectory is deformed by a non-small perturbation. The mathematical results exhibited in this article apply well beyond sonar classification problems. This article is written in a way that supports full mathematical generality.