This paper surveys implicit characterizations of complexity classes by fragments of higher-order programming languages, with a special focus on type systems and subsystems of linear logic. Particular emphasis is put on Martin Hofmann’s contributions to the subject, which did much to shape the field.
This paper explores relational syllogistic logics, a family of logical systems for reasoning about relations in extensions of the classical syllogistic. All of these systems are decidable. We prove completeness theorems and complexity results for a natural subfamily of relational syllogistic logics, parametrized by constructors for terms and for sentences.
In this paper we investigate the computational complexity of deciding if the variety generated by a given finite idempotent algebra satisfies a special type of Maltsev condition that can be specified using a certain kind of finite labelled path. This class of Maltsev conditions includes several well-known conditions, such as congruence permutability and having a sequence of $n$ Jónsson terms, for some given $n$. We show that for such “path-defined” Maltsev conditions, the decision problem is polynomial-time solvable.
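For reference, a sequence of Jónsson terms $d_0,\ldots,d_n$ is standardly defined by the following identities (not spelled out in the abstract above):

\[
\begin{aligned}
&d_0(x,y,z) \approx x, \qquad d_n(x,y,z) \approx z,\\
&d_i(x,y,x) \approx x \quad \text{for all } i,\\
&d_i(x,x,y) \approx d_{i+1}(x,x,y) \quad \text{for even } i,\\
&d_i(x,y,y) \approx d_{i+1}(x,y,y) \quad \text{for odd } i.
\end{aligned}
\]

By Jónsson’s theorem, a variety admits such a sequence for some $n$ exactly when it is congruence distributive.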
This paper investigates the computational complexity of deciding if a given finite idempotent algebra has a ternary term operation $m$ that satisfies the minority equations $m(y,x,x)\approx m(x,y,x)\approx m(x,x,y)\approx y$. We show that a common polynomial-time approach to testing for this type of condition will not work in this case and that this decision problem lies in the class NP.
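As a concrete illustration of the minority equations themselves (not of the paper’s NP upper bound), the following sketch brute-force checks whether a given ternary operation on a small finite set satisfies them; the example operation, the Boolean minority function, is an illustrative choice.

```python
from itertools import product

def is_minority(m, universe):
    """Check m(y,x,x) = m(x,y,x) = m(x,x,y) = y for all x, y."""
    return all(
        m(y, x, x) == y and m(x, y, x) == y and m(x, x, y) == y
        for x, y in product(universe, repeat=2)
    )

# On Z_2, m(x, y, z) = x + y + z (mod 2) is the minority operation.
print(is_minority(lambda x, y, z: (x + y + z) % 2, (0, 1)))  # True
```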
The computation of gamblets is accelerated by localizing their computation in a hierarchical manner (using a hierarchy of distances), and the approximation errors caused by these localization steps are bounded based on three properties: nesting, the well-conditioned nature of the linear systems solved in the Gamblet Transform, and the exponential decay of the gamblets. These efficiently computed, accurate, and localized gamblets are shown to produce a Fast Gamblet Transform of near-linear complexity. Applications to the three primary classes of measurement functions in Sobolev spaces are developed.
This chapter is logical in character. The focus is on the logical properties of one particular generic structure: the generic omega-sequence. I adopt a perspective internal to arithmetic, from which arithmetic investigates \emph{one} structure.
This article presents a proof that Buss’s $S_2^2$ can prove the consistency of a fragment of Cook and Urquhart’s PV from which induction has been removed but substitution has been retained. This improves on Beckmann’s result, which establishes the consistency of the same system without substitution within the bounded arithmetic $S_2^1$.
Our proof relies on a notion of “computation” for the terms of PV. We first prove that, in the system under consideration, if an equation is proved and either its left- or right-hand side is computed, then there is a corresponding computation for its right- or left-hand side, respectively. By carefully bounding the size of these computations, we obtain a proof of this theorem inside bounded arithmetic, from which the consistency of the system readily follows.
At first sight, this result seems to yield a separation within bounded arithmetic, since Buss and Ignjatović stated that the consistency of a fragment of PV without induction but with substitution cannot be proved in Buss’s $S_2^1$. However, their proof actually establishes the unprovability of consistency for the system obtained by adding propositional logic and further axioms to a system such as ours. The system we consider, by contrast, is strictly equational, a property on which our proof relies.
The paper introduces a graph-theoretic variant of the general position problem: given a graph $G$, determine a largest set $S$ of vertices of $G$ such that no three vertices of $S$ lie on a common geodesic. Such a set is a max-gp-set of $G$ and its size is the gp-number $\text{gp}(G)$ of $G$. Upper bounds on $\text{gp}(G)$ in terms of different isometric covers are given and used to determine the gp-number of several classes of graphs. Connections between general position sets and packings are investigated and used to give lower bounds on the gp-number. It is also proved that the general position problem is NP-complete.
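Since a vertex $v$ lies on a $u$–$w$ geodesic exactly when $d(u,v)+d(v,w)=d(u,w)$, the gp-number of a small graph can be found by brute force. A minimal sketch assuming networkx (an illustration of the definition, not the paper’s method):

```python
from itertools import combinations
import networkx as nx

def in_general_position(S, d):
    """No vertex of S lies on a geodesic between two others of S."""
    for a, b, c in combinations(S, 3):
        # test each vertex of the triple as the potential 'middle' vertex
        for u, v, w in ((a, b, c), (b, a, c), (a, c, b)):
            if d[u][v] + d[v][w] == d[u][w]:
                return False
    return True

def gp_number(G):
    """Exponential-time brute force; fine for small connected graphs."""
    d = dict(nx.all_pairs_shortest_path_length(G))
    nodes = list(G.nodes)
    for size in range(len(nodes), 2, -1):
        if any(in_general_position(S, d) for S in combinations(nodes, size)):
            return size
    return min(len(nodes), 2)  # any two vertices are in general position

print(gp_number(nx.cycle_graph(6)))  # 3, witnessed e.g. by {0, 2, 4}
```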
In this paper we consider two natural notions of connectivity for hypergraphs: weak and strong. We prove that the strong vertex connectivity of a connected hypergraph is bounded by its weak edge connectivity, thereby extending a theorem of Whitney from graphs to hypergraphs. We find that, while determining a minimum weak vertex cut can be done in polynomial time and is equivalent to finding a minimum vertex cut in the 2-section of the hypergraph in question, determining a minimum strong vertex cut is NP-hard for general hypergraphs. Moreover, the problem of finding minimum strong vertex cuts remains NP-hard when restricted to hypergraphs with maximum edge size at most 3. We also discuss the relationship between strong vertex connectivity and the minimum transversal problem for hypergraphs, showing that there are classes of hypergraphs for which one of the problems is NP-hard, while the other can be solved in polynomial time.
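Since a minimum weak vertex cut coincides with a minimum vertex cut of the 2-section, the polynomial-time case above can be sketched directly; the following assumes networkx, and the toy hypergraph is a hypothetical example:

```python
import networkx as nx

def two_section(vertices, hyperedges):
    """2-section: same vertex set; u, v adjacent iff some hyperedge has both."""
    G = nx.Graph()
    G.add_nodes_from(vertices)
    for e in hyperedges:
        e = list(e)
        for i in range(len(e)):
            for j in range(i + 1, len(e)):
                G.add_edge(e[i], e[j])
    return G

def min_weak_vertex_cut(vertices, hyperedges):
    """Minimum weak vertex cut = minimum vertex cut of the 2-section."""
    return nx.minimum_node_cut(two_section(vertices, hyperedges))

# Two hyperedges glued at vertex 3: removing 3 weakly disconnects the rest.
print(min_weak_vertex_cut(range(1, 6), [{1, 2, 3}, {3, 4, 5}]))  # {3}
```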
Fix a finite semigroup $S$ and let $a_{1},\ldots ,a_{k},b$ be tuples in a direct power $S^{n}$. The subpower membership problem (SMP) for $S$ asks whether $b$ can be generated by $a_{1},\ldots ,a_{k}$. For combinatorial Rees matrix semigroups we establish a dichotomy result: if the corresponding matrix is of a certain form, then the SMP is in P; otherwise it is NP-complete. For combinatorial Rees matrix semigroups with adjoined identity, we obtain a trichotomy: the SMP is either in P, NP-complete, or PSPACE-complete. This result yields various semigroups with PSPACE-complete SMP, including the six-element Brandt monoid, the full transformation semigroup on three or more letters, and semigroups of all $n \times n$ matrices over a field for $n\geq 2$.
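The SMP always admits an obvious (worst-case exponential) closure algorithm, sketched below for a semigroup given by its multiplication function; this illustrates the problem statement rather than the dichotomy proof:

```python
def smp(mul, gens, b):
    """Is the tuple b generated by the tuples gens inside S^n?
    mul(s, t) is the multiplication of the finite semigroup S."""
    def prod(u, v):
        return tuple(mul(x, y) for x, y in zip(u, v))

    closure, frontier = set(gens), list(gens)
    while frontier:
        u = frontier.pop()
        for v in list(closure):
            for w in (prod(u, v), prod(v, u)):
                if w not in closure:
                    closure.add(w)
                    frontier.append(w)
    return b in closure

# Toy example: the two-element semilattice ({0, 1}, min) in S^2.
print(smp(min, [(1, 0), (0, 1)], (0, 0)))  # True: (1,0)(0,1) = (0,0)
print(smp(min, [(1, 0), (0, 1)], (1, 1)))  # False
```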
Among the myriad desirable properties discussed in the context of forgetting in Answer Set Programming, strong persistence naturally captures its essence. Recently, it has been shown that it is not always possible to forget a set of atoms from a program while obeying this property, and a precise criterion regarding what can be forgotten has been presented, accompanied by a class of forgetting operators that return the correct result when forgetting is possible. However, it is an open question what to do when we have to forget a set of atoms but cannot do so without violating this property. In this paper, we address this issue and investigate three natural alternatives for forgetting when forgetting without violating strong persistence is not possible; these turn out to correspond to the different possible relaxations of the characterization of strong persistence. Additionally, we discuss when each alternative is preferable, shed light on the relation between forgetting and notions of relativized equivalence established earlier in the context of Answer Set Programming, and present a detailed study of their computational complexity.
Pointer analysis is a fundamental static program analysis for computing the set of objects that an expression can refer to. Decades of research have gone into developing methods of varying precision and efficiency for pointer analysis of programs that use different language features, but determining precisely how efficient a particular method is has been a challenge in itself.
We consider methods for pointer analysis of such programs using Datalog and extensions of Datalog. When the rules are in Datalog, we show how to calculate precise time complexities from the rules, using a new algorithm that decomposes rules so as to obtain the best complexities. When extensions such as function symbols and universal quantification are used, we describe algorithms for efficiently implementing the extensions, together with the complexities of those algorithms.
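To make the Datalog connection concrete, here is a hedged sketch of a textbook Andersen-style points-to analysis written as a naive fixpoint loop; the rules in the docstring mirror the usual Datalog encoding and are an illustration, not the paper’s system:

```python
def andersen(alloc, move, load, store):
    """Naive fixpoint for the classic Datalog rules:
      pts(v, h)     <- alloc(v, h).
      pts(v, h)     <- move(v, w), pts(w, h).
      pts(v, h)     <- load(v, p, f), pts(p, o), heap(o, f, h).
      heap(o, f, h) <- store(p, f, w), pts(p, o), pts(w, h)."""
    pts, heap = set(alloc), set()
    changed = True
    while changed:
        changed = False
        new_pts = {(v, h) for (v, w) in move
                   for (w2, h) in pts if w2 == w}
        new_pts |= {(v, h) for (v, p, f) in load
                    for (p2, o) in pts if p2 == p
                    for (o2, f2, h) in heap if (o2, f2) == (o, f)}
        new_heap = {(o, f, h) for (p, f, w) in store
                    for (p2, o) in pts if p2 == p
                    for (w2, h) in pts if w2 == w}
        if not new_pts <= pts or not new_heap <= heap:
            pts |= new_pts
            heap |= new_heap
            changed = True
    return pts, heap

# x = new A(); y = x;  =>  y can point to A.
pts, _ = andersen({("x", "A")}, {("y", "x")}, set(), set())
print(("y", "A") in pts)  # True
```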
We propose to strengthen Popper’s notion of falsifiability by adding the requirement that when an observation is inconsistent with a theory, there must be a ‘short proof’ of this inconsistency. We model the concept of a short proof using tools from computational complexity, and provide some examples of economic theories that are falsifiable in the usual sense but not with this additional requirement. We consider several variants of the definition of ‘short proof’ and several assumptions about the difficulty of computation, and study their different implications for the falsifiability of theories.
Presburger arithmetic is the first-order theory of the natural numbers with addition (but no multiplication). We characterize the sets definable by a Presburger formula as exactly the sets whose characteristic functions can be represented by rational generating functions; a geometric characterization of such sets is also given. In addition, if $p = (p_1, \ldots, p_n)$ is a tuple of some of the free variables in a Presburger formula, we can define a counting function $g(p)$ to be the number of solutions to the formula for a given $p$. We show that every counting function obtained in this way may be represented as, equivalently, either a piecewise quasi-polynomial or a rational generating function. Finally, we translate known computational complexity results into this setting and discuss open directions.
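A worked example of the correspondence (an illustration, not taken from the article): counting $g(p) = \#\{x \in \mathbb{N} : 2x \le p\}$ over the parameter $p$ gives

\[
g(p) = \left\lfloor \frac{p}{2} \right\rfloor + 1 =
\begin{cases}
(p+2)/2, & p \text{ even},\\
(p+1)/2, & p \text{ odd},
\end{cases}
\qquad
\sum_{p \ge 0} g(p)\,t^{p} = \frac{1}{(1-t)(1-t^{2})},
\]

a piecewise quasi-polynomial on the left and a rational generating function on the right.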
X-ray pulsar navigation is a promising technology for autonomous spacecraft navigation. The key measurement of pulsar navigation is the time delay (phase delay). There are various methods to estimate the phase delay, but most of them have high computational complexity. In this paper, a new method for phase delay estimation is proposed, based on the time-shift property of the discrete Fourier transform (DFT). With this method, the time complexity can be greatly reduced, and a delta-function approximation can be used to further decrease the computational cost. Numerical simulation shows that the proposed method is effective for phase delay estimation, and the reduced complexity makes it more suitable for on-board implementation.
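The time-shift property says that a circular delay of $d$ samples multiplies the $k$-th DFT bin by $e^{-2\pi i kd/N}$, so the delay can be recovered in the frequency domain. A minimal numpy sketch of this property (an illustration, not the authors’ estimator):

```python
import numpy as np

def estimate_delay(x, y):
    """Estimate the integer circular delay d with y[n] = x[(n - d) mod N].
    Since Y[k] = X[k] * exp(-2j*pi*k*d/N), the circular cross-correlation
    ifft(conj(X) * Y) peaks at lag d."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    r = np.fft.ifft(np.conj(X) * Y)
    return int(np.argmax(np.abs(r)))

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # stand-in for a folded pulse profile
y = np.roll(x, 37)              # delayed observation
print(estimate_delay(x, y))     # 37
```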
We answer the following question posed by Lechuga: given a simply connected space $X$ with both $H^*(X;\mathbb{Q})$ and $\pi_*(X)\otimes\mathbb{Q}$ finite dimensional, what is the computational complexity of an algorithm computing the cup length and the rational Lusternik–Schnirelmann category of $X$?
By a reduction from the problem of deciding whether a given graph is $k$-colourable for $k \geq 3$, we show that even stricter versions of the problems above are NP-hard.
The new High Efficiency Video Coding (HEVC) standard achieves higher encoding efficiency than its predecessors, such as H.264/AVC. One of the factors responsible for this improvement is the new intra prediction method, which introduces a larger number of prediction directions, resulting in enhanced rate-distortion (RD) performance at the cost of higher computational complexity. This paper proposes an algorithm to accelerate the intra mode decision, reducing the complexity of intra coding. The acceleration procedure takes into account local texture directionality information and exploits the correlation of intra modes across levels of the hierarchical tree structure used in HEVC. Experimental results show that the proposed algorithm reduces the HEVC intra prediction processing time by 39.22% and 43.88% on average, for the all-intra high efficiency (AI-HE) and low complexity (AI-LC) configurations, respectively, with a small degradation in encoding efficiency (average BD-PSNR loss of 0.1 dB for AI-HE and 0.8 dB for AI-LC).
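As a hedged illustration of the texture-directionality idea (a sketch; the mode indexing and window size are hypothetical, not taken from the paper): estimate a block’s dominant orientation from its gradients and test only the angular modes near it.

```python
import numpy as np

def dominant_direction(block):
    """Dominant gradient orientation of a block, via the structure-tensor
    double-angle trick so that opposite gradients reinforce."""
    gy, gx = np.gradient(block.astype(float))
    return 0.5 * np.arctan2(2 * (gx * gy).sum(), (gx**2 - gy**2).sum())

def candidate_modes(theta, n_modes=33, window=2):
    """Map the angle in (-pi/2, pi/2] to a hypothetical angular-mode index
    and keep only a small window of neighbours for full RD testing."""
    idx = int(round((theta + np.pi / 2) / np.pi * (n_modes - 1)))
    return [m for m in range(idx - window, idx + window + 1) if 0 <= m < n_modes]
```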
Virtual decomposition control (VDC) is an efficient tool for dealing with the full-dynamics-based control problem of complex robots. However, the regressor-based adaptive control used by VDC to control every subsystem and to estimate the unknown parameters demands specific knowledge of the system physics. In this paper, we therefore reorganize the VDC equations for a serial chain manipulator using the adaptive function approximation technique (FAT), which does not require specific system physics. The dynamic matrices of the dynamic equation of every subsystem (e.g. link and joint) are approximated by orthogonal functions, chosen for the small approximation errors they produce. The control, the virtual stability of every subsystem, and the stability of the entire robotic system are proved in this work. The computational complexity of the FAT is then compared with that of the regressor-based approach. Despite the apparent advantage of the FAT in avoiding the regressor matrix, its computational complexity can cause difficulties in implementation, because the dynamic matrices of the link subsystem are represented by two large sparse matrices. In effect, the FAT-based adaptive VDC requires further work to improve the representation of the dynamic matrices of the target subsystem. Two case studies are simulated in Matlab/Simulink for verification purposes: a 2-R manipulator and a 6-DOF planar biped robot.
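The core of the FAT is to expand each unknown dynamics entry in a truncated orthogonal basis and adapt the coefficients online. A self-contained, hedged illustration of the approximation step only (the function g and the degree are hypothetical choices, not the paper’s):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical unknown dynamics entry, e.g. a gravity term g(q) on q in [-1, 1].
g = lambda q: 9.81 * np.sin(q) + 0.3 * q**2

q = np.linspace(-1.0, 1.0, 200)
coeffs = C.chebfit(q, g(q), deg=7)   # truncated orthogonal (Chebyshev) expansion
err = np.max(np.abs(C.chebval(q, coeffs) - g(q)))
print(err)                           # small residual for a smooth entry
```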
We discuss how much space is sufficient to decide whether a number $n$ given in unary is prime. We show that $O(\log \log n)$ space is sufficient for a deterministic Turing machine, if it is equipped with an additional pebble movable along the input tape, and also for an alternating machine, if the space restriction applies only to its accepting computation subtrees. In other words, the unary language of primes is in pebble-DSPACE$(\log \log n)$ and also in accept-ASPACE$(\log \log n)$. Moreover, if the given $n$ is composite, such machines are able to find a divisor of $n$. Since $O(\log \log n)$ space is too small to write down a divisor, which might require $\Omega(\log n)$ bits, the witness divisor is indicated by the position of the input head at the moment when the machine halts.
We introduce a type of isomorphism among strategic games that we call local isomorphism. Local isomorphism is a weaker version of the notions of strong and weak game isomorphism introduced in [J. Gabarro, A. Garcia and M. Serna, Theor. Comput. Sci. 412 (2011) 6675–6695]. A local isomorphism is required to preserve, for every player, the player’s preferences on the sets of strategy profiles that differ only in the action selected by that player. We show that the game isomorphism problem for local isomorphism is equivalent to the same problem for strong or weak isomorphism for strategic games given in general, explicit and formula general form. As a consequence of the results in [J. Gabarro, A. Garcia and M. Serna, Theor. Comput. Sci. 412 (2011) 6675–6695], this implies that the local isomorphism problem for strategic games is equivalent to (a) the circuit isomorphism problem for games given in general form, (b) the boolean formula isomorphism problem for formula games in general form, and (c) the graph isomorphism problem for games given in explicit form.
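To make the definition concrete, a brute-force sketch for tiny two-player games given as payoff matrices (an illustration, not the paper’s construction); for brevity it fixes the player bijection to the identity and searches only over strategy bijections:

```python
from itertools import permutations

def preserves_local_preferences(u1, u2, v1, v2, p1, p2):
    """u*/v*: payoff matrices u[a][b] of two games of equal shape.
    p1, p2: strategy bijections (tuples). Check that each player's
    preference order over own deviations is preserved."""
    A, B = range(len(u1)), range(len(u1[0]))
    for b in B:                  # player 1 deviates, opponent fixed at b
        for a in A:
            for a2 in A:
                if ((u1[a][b] <= u1[a2][b]) !=
                        (v1[p1[a]][p2[b]] <= v1[p1[a2]][p2[b]])):
                    return False
    for a in A:                  # player 2 deviates, opponent fixed at a
        for b in B:
            for b2 in B:
                if ((u2[a][b] <= u2[a][b2]) !=
                        (v2[p1[a]][p2[b]] <= v2[p1[a]][p2[b2]])):
                    return False
    return True

def locally_isomorphic(u1, u2, v1, v2):
    return any(preserves_local_preferences(u1, u2, v1, v2, p1, p2)
               for p1 in permutations(range(len(u1)))
               for p2 in permutations(range(len(u1[0]))))
```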