We show that the image of a subshift X under various injective morphisms of symbolic algebraic varieties over monoid universes with algebraic variety alphabets is a subshift of finite type, respectively a sofic subshift, if and only if so is X. Similarly, let G be a countable monoid and let A, B be Artinian modules over a ring. We prove that for every closed subshift submodule $\Sigma \subset A^G$ and every injective G-equivariant uniformly continuous module homomorphism $\tau \colon \Sigma \to B^G$, a subshift $\Delta \subset \Sigma$ is of finite type, respectively sofic, if and only if so is the image $\tau(\Delta)$. Generalizations for admissible group cellular automata over admissible Artinian group structure alphabets are also obtained.
The ring $\mathbb Z_{d}$ of d-adic integers has a natural interpretation as the boundary of a rooted d-ary tree $T_{d}$. Endomorphisms of this tree (that is, solenoidal maps) are in one-to-one correspondence with 1-Lipschitz mappings from $\mathbb Z_{d}$ to itself. In the case when $d=p$ is prime, Anashin [‘Automata finiteness criterion in terms of van der Put series of automata functions’, p-Adic Numbers Ultrametric Anal. Appl. 4(2) (2012), 151–160] showed that $f\in \mathrm {Lip}^{1}(\mathbb Z_{p})$ is defined by a finite Mealy automaton if and only if the reduced coefficients of its van der Put series constitute a p-automatic sequence over a finite subset of $\mathbb Z_{p}\cap \mathbb Q$. We generalize this result to arbitrary integers $d\geq 2$ and describe the explicit connection between the Moore automaton producing such a sequence and the Mealy automaton inducing the corresponding endomorphism of a rooted tree. We also produce two algorithms converting one automaton to the other and vice versa. As a demonstration, we apply our algorithms to the Thue–Morse sequence and to one of the generators of the lamplighter group acting on the binary rooted tree.
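For readers unfamiliar with automatic sequences: a sequence is d-automatic when its n-th term is produced by a finite automaton reading the base-d digits of n. The following minimal Python sketch (our illustration, not the paper's construction) computes the 2-automatic Thue–Morse sequence with a two-state Moore automaton whose state flips on reading the digit 1:

```python
# Illustrative sketch (ours): the Thue–Morse sequence is 2-automatic.
# Feed the base-2 digits of n to a two-state automaton; the output of
# the final state is the n-th term.
def thue_morse_automatic(n):
    state = 0                          # start state
    for digit in (int(d) for d in bin(n)[2:]):
        state ^= digit                 # transition: flip on 1, stay on 0
    return state                       # output map is the identity

print([thue_morse_automatic(i) for i in range(8)])  # → [0, 1, 1, 0, 1, 0, 0, 1]
```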
The Thue–Morse sequence is a prototypical automatic sequence found in diverse areas of mathematics and in computer science. We study occurrences of factors w within this sequence, or more precisely, the sequence of gaps between consecutive occurrences. This gap sequence is morphic; we prove that it is not automatic as soon as the length of w is at least $2$, thereby answering a question of J. Shallit in the affirmative. We give an explicit method to compute the discrepancy of the number of occurrences of the block $\mathtt{01}$ in the Thue–Morse sequence. We prove that the sequence of discrepancies is the sequence of output sums of a certain base-$2$ transducer.
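The gap sequences studied above are easy to experiment with. The following illustrative Python sketch (function names are ours, not the paper's) lists the gaps between consecutive occurrences of a factor w in a prefix of the Thue–Morse sequence:

```python
# Illustrative sketch: gaps between consecutive occurrences of a factor w
# in a prefix of the Thue–Morse sequence t, where t[i] is the parity of
# the number of 1-bits in the binary expansion of i.
def thue_morse(n):
    """First n terms of the Thue–Morse sequence as a 0/1 string."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

def gap_sequence(w, n):
    """Gaps between consecutive occurrences of the factor w in t[0..n)."""
    t = thue_morse(n)
    occ = [i for i in range(len(t) - len(w) + 1) if t.startswith(w, i)]
    return [b - a for a, b in zip(occ, occ[1:])]

print(gap_sequence("01", 32))  # → [3, 3, 4, 2, 3, 3, 2, 4, 3, 3]
```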
An oracle A is low-for-speed if it is unable to speed up the computation of a set which is already computable: if a decidable language can be decided in time $t(n)$ using A as an oracle, then it can be decided without an oracle in time $p(t(n))$ for some polynomial p. The existence of a set which is low-for-speed was first shown by Bayer and Slaman, who constructed a non-computable computably enumerable set which is low-for-speed. In this paper we answer a question previously raised by Bienvenu and Downey, who asked whether there is a minimal degree which is low-for-speed. The standard method of constructing a set of minimal degree via forcing is incompatible with making the set low-for-speed, but we are able to use an interesting new combination of forcing and full approximation to construct a set which is both of minimal degree and low-for-speed.
In numerical linear algebra, a well-established practice is to choose a norm that exploits the structure of the problem at hand to optimise accuracy or computational complexity. In numerical polynomial algebra, a single norm (attributed to Weyl) dominates the literature. This article initiates the use of $L_p$ norms for numerical algebraic geometry, with an emphasis on $L_{\infty}$. This classical idea yields strong improvements in the analysis of the number of steps performed by numerous iterative algorithms. In particular, we exhibit three algorithms where, despite the complexity of computing the $L_{\infty}$-norm, the use of $L_p$-norms substantially reduces computational complexity: a subdivision-based algorithm in real algebraic geometry for computing the homology of semialgebraic sets, a well-known meshing algorithm in computational geometry, and the computation of zeros of systems of complex quadratic polynomials (a particular case of Smale’s 17th problem).
Combinatorial samplers are algorithmic schemes devised for the approximate- and exact-size generation of large random combinatorial structures, such as context-free words, various tree-like data structures, maps, tilings, and RNA molecules. They can be adapted to combinatorial specifications with additional parameters, allowing for a more flexible control over the output profile of parametrised combinatorial patterns. One can control, for instance, the number of leaves, the profile of node degrees in trees, or the number of certain sub-patterns in generated strings. However, such flexible control requires an additional and nontrivial tuning procedure. Using techniques of convex optimisation, we present an efficient tuning algorithm for multi-parametric combinatorial specifications. Our algorithm works in polynomial time in the system description length, the number of tuning parameters, the number of combinatorial classes in the specification, and the logarithm of the total target size. We demonstrate the effectiveness of our method on a series of practical examples, including rational, algebraic, and so-called Pólya specifications. We show how our method can be adapted to a broad range of less typical combinatorial constructions, including symmetric polynomials, labelled sets and cycles with cardinality lower bounds, simple increasing trees or substitutions. Finally, we discuss some practical aspects of our prototype tuner implementation and provide its benchmark results.
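The tuning idea can be illustrated in the simplest one-parameter case: choosing the Boltzmann parameter of a sampler for binary trees so that the expected output size hits a target. This toy Python sketch uses bisection in place of the paper's polynomial-time convex-optimisation tuner, and all names are illustrative, not from the paper:

```python
# Toy sketch (ours): tune the Boltzmann parameter x of a sampler for
# binary trees, counted by internal nodes, so that the expected size of
# a generated tree equals a target.  The generating function satisfies
# T(x) = 1 + x*T(x)^2 (Catalan numbers); the Boltzmann mean size is
# x*T'(x)/T(x) = x*T/(1 - 2*x*T), using T' = T^2/(1 - 2*x*T).
import math

def expected_size(x):
    """Expected number of internal nodes at Boltzmann parameter x, 0 < x < 1/4."""
    s = math.sqrt(1.0 - 4.0 * x)
    T = (1.0 - s) / (2.0 * x)
    return x * T / (1.0 - 2.0 * x * T)

def tune(target, iters=60):
    """Bisection for x with expected_size(x) == target; expected_size
    increases from 0 to infinity on (0, 1/4)."""
    lo, hi = 0.0, 0.25
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expected_size(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

x = tune(100.0)
print(x, expected_size(x))
```

The general multi-parametric tuner in the paper replaces this scalar bisection with convex optimisation over all parameters simultaneously.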
In [20] Krajíček and Pudlák discovered connections between problems in computational complexity and the lengths of first-order proofs of finite consistency statements. Later Pudlák [25] studied more statements that connect provability with computational complexity and conjectured that they are true. All these conjectures are at least as strong as $\mathsf {P}\neq \mathsf {NP}$ [23–25]. One of the problems concerning these conjectures is to find out how tightly they are connected with statements about computational complexity classes. Results of this kind had been proved in [20, 22]. In this paper, we generalize and strengthen these results. Another question that we address concerns the dependence between these conjectures. We construct two oracles that enable us to answer questions about relativized separations asked in [19, 25] (i.e., for the pairs of conjectures mentioned in the questions, we construct oracles such that one conjecture from the pair is true in the relativized world and the other is false, and vice versa). We also show several new connections between the studied conjectures. In particular, we show that the relation between the finite reflection principle and proof systems for existentially quantified Boolean formulas is similar to the one for finite consistency statements and proof systems for non-quantified propositional tautologies.
In this survey we discuss work of Levin and V’yugin on collections of sequences that are non-negligible in the sense that they can be computed by a probabilistic algorithm with positive probability. More precisely, Levin and V’yugin introduced an ordering on collections of sequences that are closed under Turing equivalence. Roughly speaking, given two such collections $\mathcal {A}$ and $\mathcal {B}$, $\mathcal {A}$ is below $\mathcal {B}$ in this ordering if $\mathcal {A}\setminus \mathcal {B}$ is negligible. The degree structure associated with this ordering, the Levin–V’yugin degrees (or $\mathrm {LV}$-degrees), can be shown to be a Boolean algebra, and in fact a measure algebra. We demonstrate the interactions of this work with recent results in computability theory and algorithmic randomness: first, we recall the definition of the Levin–V’yugin algebra and identify connections between its properties and classical properties from computability theory. In particular, we apply results on the interactions between notions of randomness and Turing reducibility to establish new facts about specific LV-degrees, such as the LV-degree of the collection of 1-generic sequences, that of the collection of sequences of hyperimmune degree, and those collections corresponding to various notions of effective randomness. Next, we provide a detailed explanation of a complex technique developed by V’yugin that allows the construction of semi-measures into which computability-theoretic properties can be encoded. We provide two examples of the use of this technique by explicating a result of V’yugin’s about the LV-degree of the collection of Martin-Löf random sequences and extending the result to the LV-degree of the collection of sequences of DNC degree.
We study from the proof complexity perspective the (informal) proof search problem (cf. [17, Sections 1.5 and 21.5]):
• Is there an optimal way to search for propositional proofs?
We note that, as a consequence of Levin’s universal search, for any fixed proof system there exists a time-optimal proof search algorithm. Using classical proof complexity results about reflection principles we prove that a time-optimal proof search algorithm exists without restricting proof systems iff a p-optimal proof system exists.
To characterize precisely the time proof search algorithms need for individual formulas, we introduce a new proof complexity measure based on algorithmic information concepts. In particular, to a proof system P we attach an information-efficiency function $i_P(\tau)$ assigning to a tautology a natural number, and we show that:
• $i_P(\tau)$ characterizes the time any P-proof search algorithm has to use on $\tau$,
• for a fixed P there is such an information-optimal algorithm (informally: it finds proofs of minimal information content),
• a proof system is information-efficiency optimal (its information-efficiency function is minimal up to a multiplicative constant) iff it is p-optimal,
• for non-automatizable systems P there are formulas $\tau$ with short proofs but having large information measure $i_P(\tau)$.
We isolate and motivate the problem of establishing unconditional super-logarithmic lower bounds for $i_P(\tau)$ where no super-polynomial size lower bounds are known. We also point out connections of the new measure with some topics in proof complexity other than proof search.
In this paper we analyse the limiting conditional distribution (Yaglom limit) for stochastic fluid models (SFMs), a key class of models in the theory of matrix-analytic methods. So far, only transient and stationary analyses of SFMs have been considered in the literature. The limiting conditional distribution gives useful insights into what happens when the process has been evolving for a long time, given that its busy period has not ended yet. We derive expressions for the Yaglom limit in terms of the singularity $s^*$ such that the key matrix of the SFM, ${\boldsymbol{\Psi}}(s)$, is finite (exists) for all $s\geq s^*$ and infinite for $s<s^*$. We show the uniqueness of the Yaglom limit and illustrate the application of the theory with simple examples.
We obtain a polynomial upper bound on the mixing time $T_{CHR}(\epsilon)$ of the coordinate Hit-and-Run (CHR) random walk on an $n$-dimensional convex body, where $T_{CHR}(\epsilon)$ is the number of steps needed to reach within $\epsilon$ of the uniform distribution with respect to the total variation distance, starting from a warm start (i.e., a distribution which has a density with respect to the uniform distribution on the convex body that is bounded above by a constant). Our upper bound is polynomial in n, R and $\frac{1}{\epsilon}$, where we assume that the convex body contains the $\Vert\cdot\Vert_\infty$-unit ball $B_\infty$ and is contained in its R-dilation $R\cdot B_\infty$. Whether CHR has a polynomial mixing time has been an open question.
Fix an abelian group $\Gamma$ and an injective endomorphism $F\colon \Gamma \to \Gamma$. Improving on the results of [2], new characterizations are here obtained for the existence of spanning sets, F-automaticity, and F-sparsity. The model theoretic status of these sets is also investigated, culminating with a combinatorial description of the F-sparse sets that are stable in $(\Gamma ,+)$, and a proof that the expansion of $(\Gamma ,+)$ by any F-sparse set is NIP. These methods are also used to show for prime $p\ge 7$ that the expansion of $(\mathbb {F}_p[t],+)$ by multiplication restricted to $t^{\mathbb {N}}$ is NIP.
We apply the power-of-two-choices paradigm to a random walk on a graph: rather than moving to a uniform random neighbour at each step, a controller is allowed to choose from two independent uniform random neighbours. We prove that this allows the controller to significantly accelerate the hitting and cover times in several natural graph classes. In particular, we show that the cover time becomes linear in the number n of vertices on discrete tori and bounded degree trees, of order ${\mathcal O}(n\log \log n)$ on bounded degree expanders, and of order ${\mathcal O}(n(\log \log n)^2)$ on the Erdős–Rényi random graph in a certain sparsely connected regime. We also consider the algorithmic question of computing an optimal strategy and prove a dichotomy in efficiency between computing strategies for hitting and cover times.
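The flavour of the model is easy to simulate. The sketch below (our toy greedy controller on the n-cycle, not one of the optimal strategies analysed in the paper) compares cover times with and without the power of two choices:

```python
# Toy sketch (ours): cover time of a walk on the n-cycle, with and
# without the power of two choices.  The two-choice controller greedily
# prefers an unvisited neighbour when one of its two samples is unvisited.
import random

def cover_time(n, two_choices, seed=0):
    """Number of steps until every vertex of the n-cycle is visited."""
    rng = random.Random(seed)
    visited, v, steps = {0}, 0, 0
    while len(visited) < n:
        nbrs = [(v - 1) % n, (v + 1) % n]
        a = rng.choice(nbrs)
        if two_choices:
            b = rng.choice(nbrs)       # second independent sample
            v = a if a not in visited else b
        else:
            v = a
        visited.add(v)
        steps += 1
    return steps

print(cover_time(100, False), cover_time(100, True))
```

On the cycle, the simple random walk needs order $n^2$ steps to cover, while even this naive greedy two-choice strategy covers in order n steps.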
We prove that most permutations of degree $n$ have some power which is a cycle of prime length approximately $\log n$. Explicitly, we show that for $n$ sufficiently large, the proportion of such elements is at least $1-5/\log \log n$ with the prime between $\log n$ and $(\log n)^{\log \log n}$. The proportion of even permutations with this property is at least $1-7/\log \log n$.
We extend work of Berdinsky and Khoussainov [‘Cayley automatic representations of wreath products’, International Journal of Foundations of Computer Science 27(2) (2016), 147–159] to show that being Cayley automatic is closed under taking the restricted wreath product with a virtually infinite cyclic group. This adds to the list of known examples of Cayley automatic groups.
We present a polynomial-time Markov chain Monte Carlo algorithm for estimating the partition function of the antiferromagnetic Ising model on any line graph. The analysis of the algorithm exploits the ‘winding’ technology devised by McQuillan [CoRR abs/1301.2880 (2013)] and developed by Huang, Lu and Zhang [Proc. 27th Symp. on Disc. Algorithms (SODA16), 514–527]. We show that exact computation of the partition function is #P-hard, even for line graphs, indicating that an approximation algorithm is the best that can be expected. We also show that Glauber dynamics for the Ising model is rapidly mixing on line graphs, an example being the kagome lattice.
As a new type of epistemic logics, the logics of knowing how capture the high-level epistemic reasoning about the knowledge of various plans to achieve certain goals. Existing work on these logics focuses on axiomatizations; this paper makes the first study of their model theoretical properties. It does so by introducing suitable notions of bisimulation for a family of five knowing how logics based on different notions of plans. As an application, we study and compare the expressive power of these logics.
Shallit and Wang showed that the automatic complexity $A(x)$ satisfies $A(x)\ge n/13$ for almost all $x\in {\{\mathtt {0},\mathtt {1}\}}^n$. They also stated that Holger Petersen had informed them that the constant $13$ can be reduced to $7$. Here we show that it can be reduced to $2+\epsilon$ for any $\epsilon>0$. The result also applies to nondeterministic automatic complexity $A_N(x)$. In that setting the result is tight inasmuch as $A_N(x)\le n/2+1$ for all x.
We study a natural model of a random $2$-dimensional cubical complex which is a subcomplex of an n-dimensional cube, and where every possible square $2$-face is included independently with probability p. Our main result exhibits a sharp threshold $p=1/2$ for homology vanishing as $n \to \infty $. This is a $2$-dimensional analogue of the Burtin and Erdős–Spencer theorems characterising the connectivity threshold for random graphs on the $1$-skeleton of the n-dimensional cube.
Our main result can also be seen as a cubical counterpart to the Linial–Meshulam theorem for random $2$-dimensional simplicial complexes. However, the models exhibit strikingly different behaviours. We show that if $p> 1 - \sqrt {1/2} \approx 0.2929$, then with high probability the fundamental group is a free group with one generator for every maximal $1$-dimensional face. As a corollary, homology vanishing and simple connectivity have the same threshold, even in the strong ‘hitting time’ sense. This is in contrast with the simplicial case, where the thresholds are far apart. The proof depends on an iterative algorithm for contracting cycles – we show that with high probability, the algorithm rapidly and dramatically simplifies the fundamental group, converging after only a few steps.
A theorem of Brudno says that the Kolmogorov–Sinai entropy of an ergodic subshift over $\mathbb {N}$ equals the asymptotic Kolmogorov complexity of almost every word in the subshift. The purpose of this paper is to extend this result to subshifts over computable groups that admit computable regular symmetric Følner monotilings, which we introduce in this work. For every $d \in \mathbb {N}$, the groups $\mathbb {Z}^d$ and $\mathsf{UT}_{d+1}(\mathbb {Z})$ admit computable regular symmetric Følner monotilings for which the required computing algorithms are provided.