Attentional bias to threat has been implicated as a cognitive mechanism in anxiety disorders for youth. Yet, prior studies documenting this bias have largely relied on a method with questionable reliability (i.e. dot-probe task) and small samples, few of which included adolescents. The current study sought to address such limitations by examining relations between anxiety – both clinically diagnosed and dimensionally rated – and attentional bias to threat.
The study included a community sample of adolescents and employed eye-tracking methodology intended to capture possible biases across the full range of both automatic (i.e. vigilance bias) and controlled attentional processes (i.e. avoidance bias, maintenance bias). We examined both dimensional anxiety (across the full sample; n = 215) and categorical anxiety in a subset case-control analysis (n = 100) as predictors of biases.
Findings indicated that participants with an anxiety disorder oriented more slowly to angry faces than matched controls. Results did not suggest a greater likelihood of initial orienting to angry faces among our participants with anxiety disorders or those with higher dimensional ratings of anxiety. Greater anxiety severity was associated with greater dwell time to neutral faces.
This is the largest study to date examining eye-tracking metrics of attention to threat among healthy and anxious youth. Findings did not support the notion that anxiety is characterized by heightened vigilance or avoidance/maintenance of attention to threat. All effects detected were extremely small. Links between attention to threat and anxiety among adolescents may be subtle and highly dependent on experimental task dimensions.
Previous studies have shown that directing learners’ attention during perceptual training facilitates detection and learning of unfamiliar consonant categories. The current study asks whether this attentional directing can also facilitate other types of phonetic learning. Monolingual Mandarin speakers were divided into two groups directed to learn either (1) the consonants or (2) the tones in an identification training task with the same set of Southern Min monosyllabic words containing the consonants /pʰ, p, b, kʰ, k, ɡ, tɕʰ, tɕ, ɕ/ and the tones (55, 33, 22, 24, 41). All subjects were also tested with an AXB discrimination task (with a distinct set of Southern Min words) before and after the training. Unsurprisingly, both groups improved in accuracy for the sound type to which they attended. However, the consonant-attending group did not improve in discriminating tones after training, nor did the tone-attending group in discriminating consonants, despite both groups having equal exposure to the same training stimuli. When combined with previous results for consonant and vowel training, these results suggest that explicitly directing learners’ attention has a broadly facilitative effect on phonetic learning, including the learning of tonal contrasts.
A recent genome-wide association study (GWAS) identified 12 independent loci significantly associated with attention-deficit/hyperactivity disorder (ADHD). Polygenic risk scores (PRS), derived from the GWAS, can be used to assess genetic overlap between ADHD and other traits. Using ADHD samples from several international sites, we derived PRS for ADHD from the recent GWAS to test whether genetic variants that contribute to ADHD also influence two cognitive functions that show strong association with ADHD: attention regulation and response inhibition, captured by reaction time variability (RTV) and commission errors (CE).
The discovery GWAS included 19 099 ADHD cases and 34 194 control participants. The combined target sample included 845 people with ADHD (age: 8–40 years). RTV and CE were available from reaction time and response inhibition tasks. ADHD PRS were calculated from the GWAS using a leave-one-study-out approach. Regression analyses were run to investigate whether ADHD PRS were associated with CE and RTV. Results across sites were combined via random effect meta-analyses.
When combining the studies in meta-analyses, results were significant for RTV (R2 = 0.011, β = 0.088, p = 0.02) but not for CE (R2 = 0.011, β = 0.013, p = 0.732). No significant association was found between ADHD PRS and RTV or CE in any sample individually (p > 0.10).
We detected a significant association between PRS for ADHD and RTV (but not CE) in individuals with ADHD, suggesting that common genetic risk variants for ADHD influence attention regulation.
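The cross-site pooling step described above (combining per-site regression estimates via random-effects meta-analysis) can be sketched as a minimal DerSimonian–Laird implementation. The per-site effect sizes and standard errors below are illustrative placeholders, not the study's data:

```python
import math

def random_effects_meta(betas, ses):
    """Pool per-site effect estimates with DerSimonian-Laird random effects."""
    w = [1 / se**2 for se in ses]                           # fixed-effect weights
    fe = sum(wi * b for wi, b in zip(w, betas)) / sum(w)    # fixed-effect pooled estimate
    q = sum(wi * (b - fe) ** 2 for wi, b in zip(w, betas))  # Cochran's Q (heterogeneity)
    df = len(betas) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                           # between-site variance estimate
    w_re = [1 / (se**2 + tau2) for se in ses]               # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_re, betas)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    return pooled, se_pooled

# Hypothetical per-site betas and standard errors, for illustration only:
pooled_beta, pooled_se = random_effects_meta([0.09, 0.07, 0.11], [0.04, 0.05, 0.06])
```

The pooled estimate is a weighted average of the site estimates, so it always lies between the smallest and largest per-site beta; when sites agree exactly, the between-site variance estimate is zero and the result reduces to a fixed-effect meta-analysis.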
The aim of this study was to identify factors associated with acceptability and efficacy of yoga training (YT) for improving cognitive dysfunction in individuals with schizophrenia (SZ).
We analysed data from two published clinical trials of YT for cognitive dysfunction among Indians with SZ: (1) a 21-day randomised controlled trial (RCT, N = 286) with 3- and 6-month follow-up and (2) a 21-day open trial (n = 62). Multivariate analyses were conducted to examine the association of baseline characteristics (age, sex, socio-economic status, educational status, duration, and severity of illness) with improvement in cognition (i.e. attention and face memory) following YT. Factors associated with acceptability were identified by comparing baseline demographic variables between screened and enrolled participants, as well as between completers and non-completers.
Enrolled participants were younger than screened persons who declined participation (t = 2.952, p = 0.003). No other characteristics were associated with study enrollment or completion. Regarding efficacy, schooling duration was nominally associated with greater and sustained cognitive improvement on a measure of facial memory. No other baseline characteristics were associated with efficacy of YT in the open trial, the RCT, or the combined samples (n = 148).
YT is acceptable even among younger individuals with SZ. It also enhances specific cognitive functions, regardless of individual differences in selected psychosocial characteristics. Thus, yoga could be incorporated as adjunctive therapy for patients with SZ. Importantly, our results suggest cognitive dysfunction is remediable in persons with SZ across the age spectrum.
To allow identification of stimuli, sensory input is initially held briefly in sensory memory. It is then held in a short-term store (STS), where it can receive the additional processing required to form a permanent memory. The existence of separate short- and long-term stores is supported by research on amnesia, demonstrating that brain damage can affect one but not the other. Forgetting in STS may be caused by decay, and by interference from other memories. STS can hold information retrieved from long-term memory when required for activities such as reading; to reflect this, it is now called working memory. Baddeley proposed that working memory has three components: the phonological loop, the visuo-spatial sketch pad, and the central executive. Consolidation theory suggests that the formation of a permanent memory requires time for the strengthening of synaptic connections; there also appears to be a consolidation process that can occur over years. We cannot attend to all the stimuli that seek entry into working memory; change blindness provides a striking example. Some theories suggest that selection occurs early in processing, others that attention can be allocated flexibly after stimuli have been identified. With practice, processing can become automatic, so that stimuli no longer require attention.
Working memory and perceptual attention are related functions, engaging many similar mechanisms and brain regions. As a consequence, behavioral and neural measures often reveal competition between working memory and attention demands. Yet there remains widespread debate about how working memory operates, and whether it truly shares processes and representations with attention. This Element will examine local-level representational properties to illuminate the storage format of working memory content, as well as systems-level and brain network communication properties to illuminate the attentional processes that control working memory. The Element will integrate both cognitive and neuroscientific accounts, describing shared substrates for working memory and perceptual attention, in a multi-level network architecture that provides robustness to disruptions and allows flexible attentional control in line with goals.
Boye and Harder (2012) claim that the grammatical–lexical distinction has to do with discourse prominence: lexical elements can convey discursively primary (or foreground) information, whereas grammatical elements cannot (outside corrective contexts). This paper reports two experiments that test this claim. Experiment 1 was a letter detection study, in which readers were instructed to mark specific letters in the text. Experiment 2 was a text-change study, in which participants were asked to register omitted words. Experiment 2 showed a main effect of word category: readers attend more to words in lexical elements (e.g., full verbs) than to those in grammatical elements (e.g., auxiliaries). Experiment 1 showed an interaction: attention to letters in focused constituents increased more for grammatical words than for lexical words. The results suggest that the lexical–grammatical contrast does indeed guide readers’ attention to words.
Identifying developmental endophenotypes on the pathway between genetics and behavior is critical to uncovering the mechanisms underlying neurodevelopmental conditions. In this proof-of-principle study, we explored whether early disruptions in visual attention are a unique or shared candidate endophenotype of autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD). We calculated the duration of the longest look (i.e., peak look) to faces in an array-based eye-tracking task for 335 14-month-old infants with and without first-degree relatives with ASD and/or ADHD. We leveraged parent-report and genotype data available for a proportion of these infants to evaluate the relation of looking behavior to familial (n = 285) and genetic liability (using polygenic scores, n = 185) as well as ASD and ADHD-relevant temperament traits at 2 years of age (shyness and inhibitory control, respectively, n = 272) and ASD and ADHD clinical traits at 6 years of age (n = 94).
Results showed that longer peak looks at the face were associated with elevated polygenic scores for ADHD (β = 0.078, p = .023), but not ASD (β = 0.002, p = .944), and with elevated ADHD traits in mid-childhood (F(1,88) = 6.401, p = .013, ηp² = 0.068; ASD: F(1,88) = 3.218, p = .076), but not in toddlerhood (ps > 0.2). This pattern of results did not emerge when considering mean peak look duration across face and nonface stimuli. Thus, alterations in attention to faces during spontaneous visual exploration may be more consistent with a developmental endophenotype of ADHD than of ASD. Our work shows that dissecting paths to neurodevelopmental conditions requires longitudinal data incorporating polygenic contribution, early neurocognitive function, and clinical phenotypic variation.
Cognitive science has often considered the impact of new technology on childhood development and the ability of digital devices to disrupt attention and cognitive processes. At the same time, the field has successfully incorporated smartphones into existing research practices, which perhaps reflects the methodological training many psychologists working within cognition and perception receive as part of their doctoral studies. For example, standard psychophysical experiments and reaction time tasks have been ported to a variety of smartphones using their built-in web browsers. This has been extended to include the large-scale gamification of traditional cognitive tests (Wilmer, Sherman and Chein, 2017). By drawing on the advanced graphical abilities of these devices, a number of cognitive tasks have been validated to assess working memory, attention and decision-making abilities (Paletta, 2014).
This chapter points towards a future in which cognitive psychology could become the first sub-discipline within psychology to develop a complete portable laboratory. This would, in turn, reveal any causal links between technology use and cognitive functioning, which continue to elude existing research paradigms.
Culture is composed of meanings (e.g., values, beliefs, and norms) and practices (e.g., conventions, scripts, and routines) that are shared, albeit unevenly, in a given community and group. Culture is integral to biological adaptation, not an overlay to the human mind but part and parcel of how the human mind functions. Since the mind is shaped through culture, it also contributes to the reproduction of culture. This chapter highlights a broad contrast thought to separate the West from the “rest,” with Westerners being more independent or less interdependent than non-Westerners, although non-Western regions themselves are highly variable, reflecting diverse adaptive strategies for achieving interdependence under varying socio-ecological conditions. We review existing behavioral and neuroscience evidence to support a broad distinction between the West and the non-West based on three core features of interdependence: predictors of happiness, holistic attention, and holistic social cognition. We also summarize recent evidence suggesting that culture influences cortical volume in specific brain regions. We conclude by pointing out that while cultural shaping of mentality is highly idiosyncratic at the individual level, it can nonetheless be systematic at the collective level, enabling faithful reproduction of the cultural system by which individuals have been trained and shaped.
Ecphrasis dramatizes a form of attention, the reflective gaze at an object. An ecphrasis also performs an interpretative process with which the reader is made complicit: the strategies of viewing comprehended by an ecphrasis are normative, even and especially when contested. When Marcel, Proust’s narrator in À la recherche du temps perdu, stands for almost three-quarters of an hour lost in admiration in front of paintings by Elstir, keeping his host and dinner guests waiting, we are invited by Proust’s prose not merely to imagine the entrancing paintings, but also to recognize and respect the aesthetic prowess and self-regard of the narrator – as well as to stand at some distance with the author from the narrator’s youthful fascination and social indiscretion. It is a passage that highlights aesthetic response as a function of modern social protocol, with Proust’s customary self-aware humour. How to stand in front of a picture, how long to look at it, what to look at, and, above all, in what language to articulate a response, are all expressive aspects of the cultural spectacle of ecphrastic performance, in antiquity as much as in fin-de-siècle Paris.
The symptoms of functional neurological disorder (FND) are a product of its pathophysiology. The pathophysiology of FND is reflective of dysfunction within and across different brain circuits that, in turn, affects specific constructs. In this perspective article, we briefly review five constructs that are affected in FND: emotion processing (including salience), agency, attention, interoception, and predictive processing/inference. Examples of underlying neural circuits include salience, multimodal integration, and attention networks. The symptoms of each patient can be described as a combination of dysfunction in several of these networks and related processes. While we have gained a considerable understanding of FND, there is more work to be done, including determining how pathophysiological abnormalities arise as a consequence of etiologic biopsychosocial factors. To facilitate advances in this underserved and important area, we propose a pathophysiology-focused research agenda to engage government-sponsored funding agencies and foundations.
The assessment of cognitive functions such as prospective memory, episodic memory, attention, and executive functions benefits from an ecologically valid approach to better understand how performance outcomes generalize to everyday life. Immersive virtual reality (VR) is considered capable of simulating real-life situations to enhance ecological validity. The present study attempted to validate the Virtual Reality Everyday Assessment Lab (VR-EAL), an immersive VR neuropsychological battery, against an extensive paper-and-pencil neuropsychological battery.
Forty-one participants (21 females) were recruited: 18 gamers and 23 non-gamers who attended both an immersive VR and a paper-and-pencil testing session. Bayesian Pearson’s correlation analyses were conducted to assess construct and convergent validity of the VR-EAL. Bayesian t-tests were performed to compare VR and paper-and-pencil testing in terms of administration time, similarity to real-life tasks (i.e., ecological validity), and pleasantness.
VR-EAL scores were significantly correlated with their equivalent scores on the paper-and-pencil tests. The participants’ reports indicated that the VR-EAL tasks were significantly more ecologically valid and pleasant than the paper-and-pencil neuropsychological battery. The VR-EAL battery also had a shorter administration time.
The VR-EAL appears to be an effective neuropsychological tool for the assessment of everyday cognitive functions: it offers enhanced ecological validity and a highly pleasant testing experience, and it does not induce cybersickness.
In this Element, a framework is proposed in which it is assumed that visual selection is the result of the interaction between top-down, bottom-up and selection-history factors. The Element discusses top-down attentional engagement and suppression, bottom-up selection by abrupt onsets and static singletons, as well as lingering biases due to selection history entailing priming, reward and statistical learning. We present an integrated framework in which biased competition among these three factors drives attention in a winner-take-all fashion. We speculate which brain areas are likely to be involved and how signals representing these three factors feed into the priority map that ultimately determines selection.
Around two-thirds of patients with auditory hallucinations experience derogatory and threatening voices (DTVs). Understandably, when these voices are believed then common consequences can be depression, anxiety and suicidal ideation. There is a need for treatment targeted at promoting distance from such voice content. The first step in this treatment development is to understand why patients listen to and believe voices that are appraised as malevolent.
To learn from patients their reasons for listening to and believing DTVs.
Theoretical sampling was used to recruit 15 participants with non-affective psychosis from NHS services who heard daily DTVs. Data were obtained by semi-structured interviews and analysed using grounded theory.
Six higher-order categories for why patients listen and/or believe voices were theorised. These were: (i) to understand the voices (e.g. what is their motive?); (ii) to be alert to the threat (e.g. prepared for what might happen); (iii) a normal instinct to rely on sensory information; (iv) the voices can be of people they know; (v) the DTVs use strategies (e.g. repetition) to capture attention; and (vi) patients feel so worn down it is hard to resist the voice experience (e.g. too mentally defeated to dismiss comments). In total, 21 reasons were identified, with all participants endorsing multiple reasons.
The study generated a wide range of reasons why patients listen to and believe DTVs. Awareness of these reasons can help clinicians understand the patient experience and also identify targets in psychological intervention.
This chapter describes the behavior change technique of goal setting. Goal setting is an established and ubiquitous technique that has been used successfully in varied and diverse contexts, for multiple behaviors, and in numerous populations. Goal setting encompasses many different perspectives from individual-level goal setting (e.g., making a new year’s resolution or reading one book a week) to goal setting by global organizations (e.g., the United Nations’ sustainable development goals). This chapter considers many different kinds of goal setting interventions, including those that have emerged in popular culture and those derived from specific theories. Given that goal setting is ubiquitous, numerous theories have emerged to explain how and why goals operate, with Locke and Latham’s (1990) goal setting theory, the focus of the current chapter, as the only theory that deals with goal setting as a behavior change technique in its own right. Goal setting theory is described in detail and used to illustrate how different types of goal setting interventions might operate. The final section includes a step-by-step guide of what to do, what not to do, and what can be left to personal preference when setting goals.
Alcohol use disorder (AUD) is associated with cognitive deficits, but little is known about the degree to which these are caused by genetically influenced traits (i.e. endophenotypes) present before the onset of the disorder. The aim of the current study was to investigate to what degree a family history (FH) of AUD is associated with cognitive functions.
Case-control cross-sectional study at an outpatient addiction research clinic. Treatment-seeking AUD patients (n = 106) were compared to healthy controls (HC; n = 90), matched for age and sex. The HC group was further subdivided into AUD FH positive (FH+; n = 47) or negative (FH−; n = 39) based on the Family Tree Questionnaire. Participants underwent psychiatric and substance use assessments, completed the Barratt Impulsiveness Scale and performed a comprehensive battery of neuropsychological tests assessing response inhibition, decision making, attention, working memory, and emotional recognition.
Compared to HC, AUD patients exhibited elevated self-rated impulsivity (p < 0.001; d = 0.62), as well as significantly poorer response inhibition (p = 0.001; d = 0.51) and attention (p = 0.021; d = 0.38), and a trend toward poorer information gathering in decision making (p = 0.073; d = 0.34). Similar to AUD patients, FH+ individuals exhibited elevated self-rated impulsivity (p = 0.096; d = 0.46), and in addition significantly worse future planning capacity (p < 0.001; d = 0.76) and prolonged emotional recognition response time (p = 0.010; d = 0.60) compared to FH−; no other significant differences were found between FH+ and FH−.
Elevated impulsivity, poor performance in future planning and emotional processing speed may be potential cognitive endophenotypes in AUD. These cognitive domains represent putative targets for prevention strategies and treatment of AUD.
Infants born preterm miss out on the peak period of in utero DHA accretion to the brain during the last trimester of pregnancy, which is hypothesised to contribute to the increased prevalence of neurodevelopmental deficits in this population. This study aimed to determine whether DHA supplementation in infants born preterm improves attention at 18 months’ corrected age. This is a follow-up of a subset of infants who participated in the N3RO randomised controlled trial. Infants were randomised to receive an enteral emulsion of high-dose DHA (60 mg/kg per d) or no DHA (soya oil – control) from within the first days of birth until 36 weeks’ post-menstrual age. The assessment of attention involved three tasks requiring the child to maintain attention on one or more toys in either the presence or absence of competition or a distractor. The primary outcome was the child’s latency of distractibility when attention was focused on a toy. The primary outcome was available for seventy-three of the 120 infants who were eligible to participate. There was no evidence of a difference between groups in the latency of distractibility (adjusted mean difference: 0·08 s, 95 % CI –0·81, 0·97; P = 0·86). Enteral DHA supplementation did not result in improved attention in infants born preterm at 18 months’ corrected age.
We investigated whether adults with attention-deficit/hyperactivity disorder (ADHD) show pseudoneglect – the preferential allocation of attention to the left visual field (LVF) and a resulting slowing of mean reaction times (MRTs) in the right visual field (RVF) characteristic of neurotypical (NT) individuals – and whether lateralization of attention is modulated by presentation speed and incentives.
Fast Task, a four-choice reaction-time task where stimuli were presented in LVF or RVF, was used to investigate differences in MRT and reaction time variability (RTV) in adults with ADHD (n = 43) and NT adults (n = 46) between a slow/no-incentive and fast/incentive condition. In the lateralization analyses, pseudoneglect was assessed based on MRT, which was calculated separately for the LVF and RVF for each condition and each study participant.
Adults with ADHD had overall slower MRT and increased RTV relative to NT. MRT and RTV improved under the fast/incentive condition. Both groups showed RVF-slowing with no between-group or between-conditions differences in RVF-slowing.
Adults with ADHD exhibited pseudoneglect, an NT pattern of lateralization of attention, which was not attenuated by presentation speed or incentives.
The nature and degree of cognitive impairments in schizoaffective disorder are not well established. The aim of this meta-analysis was to characterise cognitive functioning in schizoaffective disorder and compare it with cognition in schizophrenia and bipolar disorder. Schizoaffective disorder was considered both as a single category and as its two diagnostic subtypes, bipolar and depressive.
Following a thorough literature search (468 records identified), we included 31 studies with a total of 1685 participants with schizoaffective disorder, 3357 with schizophrenia and 1095 with bipolar disorder. Meta-analyses were conducted for seven cognitive variables comparing performance between participants with schizoaffective disorder and schizophrenia, and between schizoaffective disorder and bipolar disorder.
Participants with schizoaffective disorder performed worse than those with bipolar disorder (g = −0.30) and better than those with schizophrenia (g = 0.17). Meta-analyses of the subtypes of schizoaffective disorder showed that cognitive impairments in participants with the depressive subtype were closer in severity to those seen in participants with schizophrenia (g = 0.08), whereas participants with the bipolar subtype were more impaired than those with bipolar disorder (g = −0.23) and less impaired than those with schizophrenia (g = 0.29). Participants with the depressive subtype performed worse than those with the bipolar subtype, but this difference was not significant (g = 0.25, p = 0.05).
Cognitive impairments increase in severity from bipolar disorder to schizoaffective disorder to schizophrenia. Differences between the subtypes of schizoaffective disorder suggest combining the subtypes of schizoaffective disorder may obscure a study's results and hamper efforts to understand the relationship between this disorder and schizophrenia or bipolar disorder.
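The between-group effect sizes (g) reported above are conventionally Hedges' g, Cohen's d with a small-sample bias correction. A minimal sketch follows; the group means, standard deviations and sample sizes used in the example are hypothetical, not values from the meta-analysis:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd          # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # Hedges' correction factor
    return j * d

# Hypothetical cognitive scores: group 1 scores higher than group 2
g = hedges_g(10, 2, 50, 9, 2, 50)
```

A positive g indicates better performance in the first group; the correction factor j shrinks d slightly, with the shrinkage vanishing as the combined sample size grows.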