This chapter reviews the literature on social intelligence (SI) as it has evolved over the century since Thorndike (1920) popularized the concept. Most research on SI has been guided by an ability view, and an analogy to IQ, as exemplified by the George Washington University Social Intelligence Test, and the “behavioral” contents in Guilford’s Structure of Intellect. The assessment of SI is important for the assessment of intellectual disability (mental retardation) and the autistic spectrum, but raises the question of whether SI is a qualitatively different form of intelligence, or simply general intelligence applied in social situations. The chapter proposes an alternative knowledge view of SI as the fund of declarative and procedural knowledge which the individual brings to bear on social interactions, especially in the pursuit of important life tasks.
The ability to assess the intelligence of other species has been constrained because it is not always easy to communicate to other species what we require of them. Furthermore, we tend to define the tasks with procedures designed for us rather than for the species in question. The appropriate assessment of animal intelligence is important, however, because it has demonstrated that although the human capacity for intelligent behavior quantitatively surpasses that of other animals, qualitatively, it is not as different as we generally believe. Furthermore, the intelligent behavior of other species demonstrates that although language and culture contribute to human intelligence, they are clearly not necessary. Finally, although we attribute certain human behavior such as unskilled gambling and cognitive dissonance to our complex social environment, the fact that other species show very similar suboptimal behavior suggests that simpler underlying processes likely are responsible for those behaviors.
This chapter discusses and reviews research on the relationship between two closely aligned concepts: intelligence and reasoning. We begin by defining reasoning in a general sense. Next, we review prominent theories and models of intelligence and reasoning in both the psychometric and cognitive psychological traditions, highlighting how the two constructs are both intertwined yet nonetheless conceptually discriminable. We follow by discussing issues involved in validly measuring reasoning, touching on considerations, concerns, and evidence informed by the cognitive and psychometric perspectives. Then, we review the relationship between reasoning and allied constructs and domains, including expertise, practical outcomes (e.g., educational and workplace achievement), working memory, and critical thinking. We conclude by sketching multiple avenues for future research.
Cumulative technological culture refers to the increase in the efficiency and complexity of tools and techniques in human populations over generations. A fascinating question is to understand the cognitive origins of this phenomenon. Because cumulative technological culture is definitely a social phenomenon, most accounts have suggested a series of cognitive mechanisms oriented toward the social dimension (e.g., teaching, imitation, theory of mind, metacognition), thereby minimizing the technical dimension and the potential influence of non-social, cognitive skills. What if we have failed to see the elephant in the room? What if social cognitive mechanisms were only catalyzing factors and not the sufficient and necessary conditions for the emergence of cumulative technological culture? In this article, we offer an alternative, unified cognitive approach to this phenomenon by assuming that cumulative technological culture originates in non-social cognitive skills, namely technical-reasoning skills which enable humans to develop the technical potential necessary to constantly acquire and improve technical information. This leads us to discuss how theory of mind and metacognition, in concert with technical reasoning, can help boost cumulative technological culture. The cognitive approach developed here opens up promising new avenues for reinterpreting classical issues (e.g., innovation, emulation versus imitation, social versus asocial learning, cooperation, teaching, overimitation) in a field that has so far been largely dominated by other disciplines, such as evolutionary biology, mathematics, anthropology, archaeology, economics, and philosophy.
Canonical models of costly signaling in international relations (IR) tend to assume costly signals speak for themselves: a signal's costliness is typically understood to be a function of the signal, not the perceptions of the recipient. Integrating the study of signaling in IR with research on motivated skepticism and asymmetric updating from political psychology, we show that individuals’ tendencies to embrace information consistent with their overarching belief systems (and dismiss information inconsistent with it) have important implications for how signals are interpreted. We test our theory in the context of the 2015 Joint Comprehensive Plan of Action (JCPOA) on Iran, combining two survey experiments fielded on members of the American mass public. We find patterns consistent with motivated skepticism: the individuals most likely to update their beliefs are those who need reassurance the least, such that costly signals cause polarization rather than convergence. Successful signaling therefore requires knowing something about the orientations of the signal's recipient.
Population ethics is widely considered to be exceptionally important and exceptionally difficult. One key source of difficulty is the conflict between certain moral intuitions and analytical results identifying requirements for rational (in the sense of complete and transitive) social choice over possible populations. One prominent such intuition is the Asymmetry, which jointly proposes that the fact that a possible child’s quality of life would be bad is a normative reason not to create the child, but the fact that a child’s quality of life would be good is not a reason to create the child. This paper reports a set of questionnaire experiments about the Asymmetry in the spirit of economists’ empirical social choice. Few survey respondents show support for the Asymmetry; instead respondents report that expectations of a good quality of life are relevant. Each experiment shows evidence (among at least some participants) of dual-process moral reasoning, in which cognitive reflection is statistically associated with reporting expected good quality of life to be normatively relevant. The paper discusses possible implications of these results for the economics of population-sensitive social welfare and for the conflict between moral mathematics and population intuition.
The inherent difficulty of knowledge specification and the lack of trained specialists are some of the key obstacles on the way to making intelligent systems based on the knowledge representation and reasoning (KRR) paradigm commonplace. Knowledge and query authoring using natural language, especially controlled natural language (CNL), is one of the promising approaches that could enable domain experts, who are not trained logicians, to both create formal knowledge and query it. In previous work, we introduced the KALM system (Knowledge Authoring Logic Machine) that supports knowledge authoring (and simple querying) with very high accuracy that at present is unachievable via machine learning approaches. The present paper expands on the question answering aspect of KALM and introduces KALM-QA (KALM for Question Answering) that is capable of answering much more complex English questions. We show that KALM-QA achieves 100% accuracy on an extensive suite of movie-related questions, called MetaQA, which contains almost 29,000 test questions and over 260,000 training questions. We contrast this with a published machine learning approach, which falls far short of this high mark.
To be responsive to dynamically changing real-world environments, an intelligent agent needs to perform complex sequential decision-making tasks that are often guided by commonsense knowledge. Previous work in this line of research led to the framework called interleaved commonsense reasoning and probabilistic planning (icorpp), which used P-log for representing commonsense knowledge and Markov Decision Processes (MDPs) or Partially Observable MDPs (POMDPs) for planning under uncertainty. A main limitation of icorpp is that its implementation requires non-trivial engineering effort to bridge the commonsense reasoning and probabilistic planning formalisms. In this paper, we present a unified framework to integrate icorpp’s reasoning and planning components. In particular, we extend the probabilistic action language pBC+ to express utility, belief states, and observations as in POMDP models. Inheriting the advantages of action languages, the new action language provides an elaboration-tolerant representation of POMDPs that reflects commonsense knowledge. The idea led to the design of the system pbcplus2pomdp, which compiles a pBC+ action description into a POMDP model that can be directly processed by off-the-shelf POMDP solvers to compute an optimal policy of the pBC+ action description. Our experiments show that it retains the advantages of icorpp while avoiding the manual effort of bridging the commonsense reasoner and the probabilistic planner.
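The core operation any compiled POMDP model must support is the Bayesian belief update. The sketch below illustrates it on a made-up two-state domain; the state, action, and observation names are invented for the example and are not part of pBC+ or pbcplus2pomdp.

```python
# A minimal sketch of the POMDP belief update:
#   b'(s') proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s).
# The two-state "sensor" domain below is illustrative only.

states = ["good", "bad"]

# T[a][s][s2] = P(s2 | s, a); a single "wait" action keeps the state.
T = {"wait": {"good": {"good": 1.0, "bad": 0.0},
              "bad":  {"good": 0.0, "bad": 1.0}}}

# O[a][s2][o] = P(o | s2, a); the sensor is right 85% of the time.
O = {"wait": {"good": {"ok": 0.85, "alarm": 0.15},
              "bad":  {"ok": 0.15, "alarm": 0.85}}}

def belief_update(b, a, o):
    """One step of Bayesian filtering over the hidden state."""
    unnorm = {}
    for s2 in states:
        predicted = sum(T[a][s][s2] * b[s] for s in states)
        unnorm[s2] = O[a][s2][o] * predicted
    z = sum(unnorm.values())
    return {s2: v / z for s2, v in unnorm.items()}

b0 = {"good": 0.5, "bad": 0.5}
b1 = belief_update(b0, "wait", "alarm")
print(b1)   # belief mass shifts toward "bad" after an alarm
```

An off-the-shelf POMDP solver then computes a policy mapping such beliefs to actions, which is exactly the artifact pbcplus2pomdp is described as producing from a pBC+ action description.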
A common feature in Answer Set Programming is the use of a second negation, stronger than default negation and sometimes called explicit, strong or classical negation. This explicit negation is normally used in front of atoms, rather than being allowed as a regular operator. In this paper we consider the arbitrary combination of explicit negation with nested expressions, as defined by Lifschitz, Tang and Turner. We extend the concept of reduct to this new syntax and then prove that it can be captured by an extension of Equilibrium Logic with this second negation. We study some properties of this variant and compare it to the already known combination of Equilibrium Logic with Nelson’s strong negation.
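For readers unfamiliar with the reduct being generalized here, the following toy sketch implements the classical Gelfond-Lifschitz reduct and stable-model check for normal programs (plain atoms, default negation only); the paper's contribution is extending this construction to nested expressions with a second, explicit negation, which this fragment does not cover.

```python
# Toy Gelfond-Lifschitz reduct for normal logic programs.
# A rule is (head, positive_body, negated_body), all atoms as strings.
program = [
    ("p", frozenset(), frozenset({"q"})),   # p :- not q.
    ("q", frozenset(), frozenset({"p"})),   # q :- not p.
    ("r", frozenset({"p"}), frozenset()),   # r :- p.
]

def reduct(rules, candidate):
    """Delete rules whose negated body intersects the candidate set,
    then drop the remaining negative literals."""
    return [(head, pos) for (head, pos, neg) in rules
            if not (neg & candidate)]

def least_model(positive_rules):
    """Least fixpoint of a negation-free program."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(rules, candidate):
    return least_model(reduct(rules, candidate)) == candidate

print(is_stable(program, {"p", "r"}))   # True: reduct keeps p. and r :- p.
print(is_stable(program, {"p", "q"}))   # False
```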
Stream reasoning systems are designed for complex decision-making over possibly infinite, dynamic streams of data. Modern approaches to stream reasoning usually perform their computations using stand-alone solvers, which incrementally update their internal state and return results as new portions of the data streams are pushed. However, the performance of such approaches degrades quickly as the rate of the input data and the complexity of the decision problems grow. This problem was already recognized in the area of stream processing, where systems became distributed in order to harness the vast computing resources provided by clouds. In this paper we propose a distributed approach to stream reasoning that can efficiently split computations among different solvers communicating their results over data streams. Moreover, in order to increase the throughput of the distributed system, we suggest an interval-based semantics for the LARS language, which enables significant reductions of network traffic. Our evaluations indicate that distributed stream reasoning significantly outperforms existing stand-alone LARS solvers when the complexity of decision problems and the rate of incoming data increase.
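The traffic-saving intuition behind an interval-based semantics can be sketched independently of LARS: instead of shipping one timestamped message per tick at which an atom holds, a node ships maximal intervals. The encoding below is a hypothetical illustration of that idea, not the actual LARS semantics.

```python
# Compress the time points at which an atom holds into maximal
# [start, end] intervals, so fewer messages cross the network.

def to_intervals(timepoints):
    """timepoints: sorted list of integer ticks -> list of (start, end)."""
    intervals = []
    for t in timepoints:
        if intervals and t == intervals[-1][1] + 1:
            intervals[-1][1] = t          # extend the current interval
        else:
            intervals.append([t, t])      # open a new interval
    return [tuple(iv) for iv in intervals]

stream = [1, 2, 3, 4, 10, 11, 12, 30]     # atom holds at these ticks
encoded = to_intervals(stream)
print(encoded)                             # [(1, 4), (10, 12), (30, 30)]
print(len(stream), "messages vs", len(encoded), "intervals")
```

For dense streams the number of intervals can be far smaller than the number of ticks, which is the source of the claimed reduction in network traffic.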
Repeated executions of reasoning tasks for varying inputs are necessary in many application settings, such as stream reasoning. In this context, we propose an incremental grounding approach for the answer set semantics. We focus on the possibility of generating incrementally larger ground logic programs equivalent to a given non-ground one; such so-called overgrounded programs can be reused in combination with arbitrarily many different sets of inputs. Updating overgrounded programs requires little effort, thus making the instantiation of logic programs considerably faster when grounding is repeated over a series of inputs similar to each other. Notably, the proposed approach works “under the hood”, relieving designers of logic programs from controlling technical aspects of grounding engines and answer set systems. In this work we present the theoretical basis of the proposed incremental grounding technique, illustrate the consequent repeated evaluation strategy, and report on our experiments.
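The reuse effect can be illustrated with a toy incremental grounder for a single recursive rule: ground instances accumulate monotonically across evaluation shots, so a new batch of facts only adds the new instantiations. The rule and class below are invented for the example; real overgrounding systems handle full ASP, not this one-rule fragment.

```python
# Toy overgrounding for reach(X,Y) :- edge(X,Z), reach(Z,Y)
# (with base case reach(X,Y) :- edge(X,Y)): ground rule instances
# are kept across shots and only grow when new facts arrive.

class OverGrounder:
    def __init__(self):
        self.edges = set()
        self.reach = set()
        self.ground_rules = set()   # accumulated ground instances

    def add_facts(self, new_edges):
        """Add a batch of edge facts; return total ground rules so far."""
        self.edges |= set(new_edges)
        changed = True
        while changed:
            changed = False
            for (x, y) in list(self.edges):          # base case
                if (x, y) not in self.reach:
                    self.reach.add((x, y)); changed = True
            for (x, z) in list(self.edges):          # recursive case
                for (z2, y) in list(self.reach):
                    if z == z2:
                        rule = (("reach", x, y), ("edge", x, z), ("reach", z, y))
                        self.ground_rules.add(rule)
                        if (x, y) not in self.reach:
                            self.reach.add((x, y)); changed = True
        return len(self.ground_rules)

g = OverGrounder()
n1 = g.add_facts([("a", "b"), ("b", "c")])
n2 = g.add_facts([("c", "d")])   # second shot reuses earlier instances
print(n1, "then", n2, "ground rules; reach:", sorted(g.reach))
```

The second shot does not re-derive the instances produced in the first; it only extends the accumulated ground program, which is the essence of why repeated grounding over similar inputs gets cheaper.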
The Winograd Schema Challenge (WSC) is a natural language understanding task proposed as an alternative to the Turing test in 2011. In this work we attempt to solve WSC problems by reasoning with additional knowledge. By using an approach built on top of graph-subgraph isomorphism encoded using Answer Set Programming (ASP) we were able to handle 240 out of 291 WSC problems. The ASP encoding allows us to add additional constraints in an elaboration tolerant manner. In the process we present a graph based representation of WSC problems as well as relevant commonsense knowledge.
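The core operation the approach builds on, graph-subgraph isomorphism, can be sketched as a naive backtracking search; the paper encodes this matching in ASP, so the pure-Python version below is only an illustration of the same underlying check, on invented node names.

```python
# Naive backtracking search for a graph-subgraph isomorphism:
# map every pattern node to a distinct target node so that every
# (directed) pattern edge lands on a target edge.

def subgraph_iso(pattern, target):
    """pattern, target: lists of directed edges (u, v).
    Returns a node mapping dict, or None if no embedding exists."""
    p_nodes = sorted({n for e in pattern for n in e})
    t_nodes = sorted({n for e in target for n in e})
    t_edges = set(target)

    def extend(mapping):
        if len(mapping) == len(p_nodes):
            return dict(mapping)
        v = p_nodes[len(mapping)]
        for c in t_nodes:
            if c in mapping.values():        # keep the mapping injective
                continue
            mapping[v] = c
            if all((mapping[a], mapping[b]) in t_edges
                   for (a, b) in pattern
                   if a in mapping and b in mapping):
                result = extend(mapping)
                if result:
                    return result
            del mapping[v]                   # backtrack
        return None

    return extend({})

# A two-edge chain pattern embeds into a directed triangle.
pattern = [("x", "y"), ("y", "z")]
triangle = [("a", "b"), ("b", "c"), ("c", "a")]
print(subgraph_iso(pattern, triangle))
```

In the WSC setting the "pattern" would be a graph built from the schema sentence and the "target" a graph of commonsense knowledge; expressing the search in ASP additionally allows extra constraints to be added in an elaboration-tolerant way, as the abstract notes.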
Abstract solvers are a method to formally analyze algorithms that have been profitably used for describing, comparing and composing solving techniques in various fields such as Propositional Satisfiability (SAT), Quantified SAT, Satisfiability Modulo Theories, Answer Set Programming (ASP), and Constraint ASP.
In this paper, we design, implement and test novel abstract solutions for cautious reasoning tasks in ASP. We show how to improve the current abstract solvers for cautious reasoning in ASP with new techniques borrowed from backbone computation in SAT, in order to design new solving algorithms. By doing so, we also formally show that the algorithms for solving cautious reasoning tasks in ASP are strongly related to those for computing backbones of Boolean formulas. We implement some of the new solutions in the ASP solver wasp and show that their performance is comparable to that of state-of-the-art solutions on benchmark problems from past ASP Competitions.
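The SAT-side notion being borrowed is easy to state concretely: a literal belongs to the backbone of a formula iff it is true in every model, just as cautious reasoning asks for the atoms true in every answer set. The brute-force sketch below covers only the SAT side, on a tiny hand-made formula; real backbone engines use incremental SAT calls rather than enumeration.

```python
# Brute-force backbone of a CNF formula given as lists of integer
# literals (positive = variable true, negative = variable false).
from itertools import product

def models(clauses, variables):
    """Yield all total assignments satisfying the CNF."""
    for bits in product([False, True], repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            yield assign

def backbone(clauses, variables):
    """Intersect the literal sets of all models."""
    lits = None
    for assign in models(clauses, variables):
        current = {v if val else -v for v, val in assign.items()}
        lits = current if lits is None else lits & current
    return lits or set()

# (x1 or x2) and (not x1 or x2): x2 is forced, x1 is not.
print(backbone([[1, 2], [-1, 2]], [1, 2]))   # {2}
```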
Magic sets are a Datalog-to-Datalog rewriting technique used to optimize query answering. The rewritten program focuses on a portion of the stable model(s) of the input program that is sufficient to answer the given query. However, the rewriting may introduce new recursive definitions, possibly involving even negation and aggregations, which can slow down program evaluation. This paper enhances the magic set technique by preventing the creation of (new) recursive definitions in the rewritten program. It turns out that the new version of magic sets is closed for Datalog programs with stratified negation and aggregations, which is very convenient for obtaining efficient computation of the stable model of the rewritten program. Moreover, the rewritten program is further optimized by the elimination of subsumed rules and by the efficient handling of cases where binding propagation is lost. The research was stimulated by a challenge on the exploitation of Datalog/dlv for efficient reasoning on large ontologies. All proposed techniques have hence been implemented in the dlv system and tested for ontological reasoning, confirming their effectiveness.
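The intuition behind the rewriting can be shown without Datalog machinery: a bound query argument lets evaluation restrict itself to the relevant part of the database. The sketch below contrasts computing a full transitive closure with a query-focused version seeded by a "magic" constant; all predicate and constant names are invented for the example, and this is only the binding-propagation idea, not the paper's rewriting.

```python
# Toy illustration of the magic-set intuition for the query
# ancestor(ann, Y): only facts reachable from "ann" matter.

parent = {("ann", "bob"), ("bob", "carl"), ("dave", "eve"),
          ("eve", "fay"), ("fay", "gil")}

def full_closure(edges):
    """Bottom-up transitive closure over the whole database."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(closure):
            for (y2, z) in list(closure):
                if y == y2 and (x, z) not in closure:
                    closure.add((x, z)); changed = True
    return closure

def magic_closure(edges, seed):
    """Derive only facts whose first argument is reachable from the
    seed constant -- the binding propagated by the query."""
    magic = {seed}
    changed = True
    while changed:
        changed = False
        for (x, y) in edges:
            if x in magic and y not in magic:
                magic.add(y); changed = True
    relevant = {(x, y) for (x, y) in edges if x in magic}
    return full_closure(relevant)

print(len(full_closure(parent)), "vs", len(magic_closure(parent, "ann")))
```

On this tiny database the unfocused evaluation derives nine facts while the query-focused one derives three, which is the kind of saving the rewriting aims for at scale.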
Epistemic Logic Programs (ELPs) extend Answer Set Programming (ASP) with epistemic negation and have received renewed interest in recent years. This led to the development of new research and efficient solving systems for ELPs. In practice, ELPs are often written in a modular way, where each module interacts with other modules by accepting sets of facts as input, and passing on sets of facts as output. An interesting question then presents itself: under which conditions can such a module be replaced by another one without changing the outcome, for any set of input facts? This problem is known as uniform equivalence, and has been studied extensively for ASP. For ELPs, however, such an investigation is, as of yet, missing. In this paper, we therefore propose a characterization of uniform equivalence that can be directly applied to the language of state-of-the-art ELP solvers. We also investigate the computational complexity of deciding uniform equivalence for two ELPs, and show that it is on the third level of the polynomial hierarchy.
Answer Set Programming (ASP) is a well-established formalism for logic programming. Problem solving in ASP requires writing an ASP program whose answer sets correspond to solutions. Although the non-existence of answer sets for some ASP programs can be considered a modeling feature, it turns out to be a weakness in many other cases, and especially for query answering. Paracoherent answer set semantics extend the classical semantics of ASP to draw meaningful conclusions also from incoherent programs, with the result of increasing the range of applications of ASP. State-of-the-art implementations of paracoherent ASP adopt the semi-equilibrium semantics, but cannot be lifted straightforwardly to compute efficiently the (better) split semi-equilibrium semantics, which discards undesirable semi-equilibrium models. In this paper an efficient evaluation technique for computing a split semi-equilibrium model is presented. An experiment on hard benchmarks shows that better paracoherent answer sets can be computed using fewer computational resources than existing methods.
In recent years, abstract argumentation has met with great success in AI, since it has served to capture several non-monotonic logics for AI. Relations between argumentation framework (AF) semantics and logic programming semantics are being investigated more and more. In particular, great attention has been given to the well-known stable extensions of an AF, which are closely related to the answer sets of a logic program. However, if a framework admits a small incoherent part, no stable extension can be provided. To overcome this shortcoming, two semantics generalizing stable extensions have been studied, namely semi-stable and stage. In this paper, we show that another perspective on incoherent AFs is possible, called paracoherent extensions, as they have a counterpart in paracoherent answer set semantics. We compare this perspective with semi-stable and stage semantics, showing that computational costs remain unchanged and, moreover, that an interesting symmetric behaviour is maintained.
Members of the public can disagree with scientists in at least two ways: people can reject well-established scientific theories and they can believe fabricated, deceptive claims about science to be true. Scholars examining the reasons for these disagreements find that some individuals are more likely than others to diverge from scientists because of individual factors such as their science literacy, political ideology, and religiosity. This study builds on this literature by examining the role of conspiracy mentality in these two phenomena. Participants were recruited from a national online panel (N = 513) and in person from the first annual Flat Earth International Conference (N = 21). We found that conspiracy mentality and science literacy both play important roles in believing viral and deceptive claims about science, but evidence for the importance of conspiracy mentality in the rejection of science is much more mixed.
Information retrieval (IR) aims at retrieving documents that are most relevant to a query provided by a user. Traditional techniques rely mostly on syntactic methods. In some cases, however, links at a deeper semantic level must be considered. In this paper, we explore a type of IR task in which documents describe sequences of events, and queries are about the state of the world after such events. In this context, successfully matching documents and queries requires considering the events’ possibly implicit, uncertain effects and side effects. We begin by analyzing the problem, then propose an action language-based formalization, and finally automate the corresponding IR task using answer set programming.
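The deterministic core of this matching task can be sketched directly: apply each event's effects to an initial state and check the query against the resulting state. The event names and effect lists below are invented for the example; the paper formalizes this with an action language and ASP, and additionally handles the uncertain effects that this sketch omits.

```python
# Match a state query against a document that describes an event
# sequence, by simulating the (deterministic) effects of each event.

# effects[event] = (atoms added, atoms removed)
effects = {
    "pour_water": ({"glass_full"}, {"glass_empty"}),
    "drop_glass": ({"glass_broken", "floor_wet"}, {"glass_full"}),
}

def final_state(initial, events):
    """Apply each event's add/remove effects in order."""
    state = set(initial)
    for e in events:
        add, remove = effects[e]
        state = (state - remove) | add
    return state

def matches(document_events, query_atoms, initial):
    """The document matches if every query atom holds afterwards."""
    return query_atoms <= final_state(initial, document_events)

doc = ["pour_water", "drop_glass"]   # "She filled the glass, then dropped it."
print(matches(doc, {"floor_wet"}, {"glass_empty"}))    # True
print(matches(doc, {"glass_full"}, {"glass_empty"}))   # False
```

Note that "floor_wet" is a side effect never stated explicitly by the document's events; it follows only from the effect model, which is exactly the kind of implicit consequence the abstract argues syntactic IR methods miss.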