The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic, with its impact on our way of life, is affecting our experiences and mental health. Notably, individuals with mental disorders have been reported to have a higher risk of contracting SARS-CoV-2. Personality traits could represent an important determinant of preventative health behaviour and, therefore, the risk of contracting the virus.
We examined overlapping genetic underpinnings between major psychiatric disorders, personality traits and susceptibility to SARS-CoV-2 infection.
Linkage disequilibrium score regression was used to explore the genetic correlations of coronavirus disease 2019 (COVID-19) susceptibility with psychiatric disorders and personality traits, based on data from the largest available respective genome-wide association studies (GWAS). In two cohorts (the PsyCourse (n = 1346) and HeiDE (n = 3266) studies), polygenic risk scores were used to analyse whether a genetic association between psychiatric disorders, personality traits and COVID-19 susceptibility exists in individual-level data.
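As context for the polygenic risk score analyses, a PRS is, in its simplest form, a weighted allele count. A minimal sketch under that standard definition (the study's actual pipeline, with variant clumping and p-value thresholding, is more involved; names here are illustrative):

```python
# Minimal polygenic-score sketch: sum of per-variant allele dosages
# weighted by GWAS effect sizes. Illustrative only, not the study's code.
def polygenic_score(dosages, weights):
    # dosages: per-variant risk-allele counts (0, 1 or 2)
    # weights: per-variant effect sizes from a discovery GWAS
    return sum(d * w for d, w in zip(dosages, weights))

# Hypothetical three-variant example:
score = polygenic_score(dosages=[0, 1, 2], weights=[0.1, 0.2, 0.3])
```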
We observed no significant genetic correlations of COVID-19 susceptibility with psychiatric disorders. For personality traits, there was a significant genetic correlation of COVID-19 susceptibility with extraversion (P = 1.47 × 10⁻⁵; genetic correlation 0.284). Yet, this was not reflected in individual-level data from the PsyCourse and HeiDE studies.
We identified no significant correlation between genetic risk factors for severe psychiatric disorders and genetic risk for COVID-19 susceptibility. Among the personality traits, extraversion showed evidence of a positive genetic association with COVID-19 susceptibility in one setting but not the other. Overall, these findings highlight a complex contribution of genetic and non-genetic components to the interaction between COVID-19 susceptibility and personality traits or mental disorders.
To estimate population-based rates and to describe clinical characteristics of hospital-acquired (HA) influenza.
US Influenza Hospitalization Surveillance Network (FluSurv-NET) during 2011–2012 through 2018–2019 seasons.
Patients were identified through provider-initiated or facility-based testing. HA influenza was defined as a positive influenza test date and respiratory symptom onset >3 days after admission. Patients with positive test date >3 days after admission but missing respiratory symptom onset date were classified as possible HA influenza.
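The case definition above is essentially a classification rule. A minimal sketch of one interpretation (the function name and the convention of counting days from admission are assumptions, not FluSurv-NET code):

```python
# Sketch of the HA influenza case definition as stated above:
# HA = positive influenza test AND respiratory symptom onset >3 days
# after admission; possible HA = positive test >3 days after admission
# with a missing symptom-onset date. Illustrative interpretation only.
def classify_ha_influenza(test_day, symptom_onset_day):
    # Both arguments are days since admission (admission day = 0);
    # symptom_onset_day may be None when the onset date is missing.
    if test_day is None or test_day <= 3:
        return "not HA"
    if symptom_onset_day is None:
        return "possible HA"
    return "HA" if symptom_onset_day > 3 else "not HA"
```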
Among 94,158 influenza-associated hospitalizations, 353 (0.4%) had HA influenza. The overall adjusted rate of HA influenza was 0.4 per 100,000 persons. Among HA influenza cases, 50.7% were 65 years of age or older, and 52.0% of children and 95.7% of adults had underlying conditions; 44.9% overall had received influenza vaccine prior to hospitalization. Overall, 34.5% of HA cases received ICU care during hospitalization, 19.8% required mechanical ventilation, and 6.7% died. After including possible HA cases, prevalence among all influenza-associated hospitalizations increased to 1.3% and the adjusted rate increased to 1.5 per 100,000 persons.
Over 8 seasons, rates of HA influenza were low but were likely underestimated because testing was not systematic. A high proportion of patients with HA influenza were unvaccinated and had severe outcomes. Annual influenza vaccination and implementation of robust hospital infection control measures may help to prevent HA influenza and its impacts on patient outcomes and the healthcare system.
Studying phenotypic and genetic characteristics of age at onset (AAO) and polarity at onset (PAO) in bipolar disorder can provide new insights into disease pathology and facilitate the development of screening tools.
To examine the genetic architecture of AAO and PAO and their association with bipolar disorder disease characteristics.
Genome-wide association studies (GWASs) and polygenic score (PGS) analyses of AAO (n = 12 977) and PAO (n = 6773) were conducted in patients with bipolar disorder from 34 cohorts and a replication sample (n = 2237). The association of onset with disease characteristics was investigated in two of these cohorts.
Earlier AAO was associated with a higher probability of psychotic symptoms, suicidality, lower educational attainment, not living together and fewer episodes. Depressive onset correlated with suicidality and manic onset correlated with delusions and manic episodes. Systematic differences in AAO between cohorts and continents of origin were observed. This was also reflected in single-nucleotide variant-based heritability estimates, with higher heritabilities for stricter onset definitions. Increased PGS for autism spectrum disorder (β = −0.34 years, s.e. = 0.08), major depression (β = −0.34 years, s.e. = 0.08), schizophrenia (β = −0.39 years, s.e. = 0.08), and educational attainment (β = −0.31 years, s.e. = 0.08) were associated with an earlier AAO. The AAO GWAS identified one significant locus, but this finding did not replicate. Neither GWAS nor PGS analyses yielded significant associations with PAO.
AAO and PAO are associated with indicators of bipolar disorder severity. Individuals with an earlier onset show an increased polygenic liability for a broad spectrum of psychiatric traits. Systematic differences in AAO across cohorts, continents and phenotype definitions introduce significant heterogeneity, affecting analyses.
Dating of ancient permafrost is essential for understanding long-term permafrost stability and interpreting palaeoenvironmental conditions but presents substantial challenges to geochronology. Here, we apply four methods to permafrost from the megaslump at Batagay, east Siberia: (1) optically stimulated luminescence (OSL) dating of quartz, (2) post-infrared infrared-stimulated luminescence (pIRIR) dating of K-feldspar, (3) radiocarbon dating of organic material, and (4) 36Cl/Cl dating of ice wedges. All four chronometers produce stratigraphically consistent and comparable ages. However, OSL appears to date Marine Isotope Stage (MIS) 3 to MIS 2 deposits more reliably than pIRIR, whereas the latter is more consistent with 36Cl/Cl ages for older deposits. The lower ice complex developed at least 650 ka, potentially during MIS 16, and represents the oldest dated permafrost in western Beringia and the second-oldest known ice in the Northern Hemisphere. It has survived multiple interglaciations, including the super-interglaciation MIS 11c, though a thaw unconformity and erosional surface indicate at least one episode of permafrost thaw and erosion occurred sometime between MIS 16 and 6. The upper ice complex formed from at least 60 to 30 ka during late MIS 4 to 3. The sand unit above the upper ice complex is dated to MIS 3–2, whereas the sand unit below formed at some time between MIS 4 and 16.
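As background on the ³⁶Cl/Cl chronometer used above: cosmogenic ³⁶Cl decays with a half-life of roughly 301 ka, so in the simplest closed-system sketch (the published method applies corrections omitted here) an age follows from the measured ratio $R$ relative to an assumed initial ratio $R_0$:

$$t = \frac{1}{\lambda}\,\ln\!\frac{R_0}{R}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}, \qquad t_{1/2}\!\left({}^{36}\mathrm{Cl}\right) \approx 301\ \mathrm{ka}.$$

On this arithmetic, a deposit of ~650 ka corresponds to a little over two half-lives, i.e. a ratio that has fallen to roughly a fifth of its initial value.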
A problem of definition immediately confronts any discussion of the disciplinary history of Egyptology in Germany: what is Germany? Unlike Great Britain and France, which were established fairly early in their histories as centralised nations, Germany was organised as a loose federation of individual states until the late nineteenth century. Moreover, German language and culture were also to be found beyond the borders of the Holy Roman Empire (from the tenth century to 1806) and the North German Confederation (1866–71). Not until the foundation of the German Empire in 1871 did Germany emerge, under Prussian leadership, as a modern state with clearly defined borders. These problems of definition are compounded by the existence of the Austro-Hungarian Empire from 1806 to 1918, a multi-ethnic, but German-dominated, nation, as well as by the loss of German territories to other states in 1919 and 1945, following the country’s defeat in the First and Second World Wars (see Map 2a–d). Thus, dealing with Egyptology in ‘Germany’ can present problems, in so far as the physical and intellectual extent of the region varied.
Promulgating a continuum model of mental health and mental illness has been proposed as a way to reduce stigma by decreasing notions of differentness. This systematic review and meta-analysis examines whether continuum beliefs are associated with lower stigma, and whether continuum interventions reduce stigma.
Following a pre-defined protocol (PROSPERO: CRD42019123606), we searched three electronic databases (PubMed, Web of Science, and PsycINFO) yielding 6726 studies. After screening, we included 33 studies covering continuum beliefs, mental illness, and stigma. Of these, 13 studies were included in meta-analysis.
Continuum beliefs are consistently associated with lower stigma. Interventions were effective at manipulating continuum beliefs but differ in their effects on stigmatising attitudes.
We discuss whether and to what extent attitudes towards people with mental illness can be improved by providing information on a mental health-mental illness continuum. It appeared to be relevant whether interventions promoted a feeling of ‘us’ and a process of identification with the person with mental illness. We discuss implications for the design of future interventions.
The success rate for translation of newly engineered medical technologies into clinical practice is low. Traversing the “translational valleys of death” requires a high level of knowledge of the complex landscape of technical, ethical, regulatory, and commercialization challenges along a multi-agency path of approvals. The Indiana Clinical and Translational Sciences Institute developed a program targeted at increasing that success rate through comprehensive training, education, and resourcing. The Medical Technology Advance Program (MTAP) provides technical, educational, and consultative assistance to investigators that leverages partnerships with experts in the health products industry to speed progress toward clinical implementation. The training, resourcing, and guidance are integrated through the entire journey of medical technology translation. Investigators are supported through a set of courses that cover bioethics, ethical engineering, preclinical and clinical study design, regulatory submissions, entrepreneurship, and commercialization. In addition to the integrated technical and educational resources, program experts provide direct consultation for planning each phase along the life cycle of translation. Since 2008, nearly 200 investigators have received assistance through MTAP, resulting in over 100 publications and patents. This support via medicine–engineering–industry partnership provides a unique opportunity to expedite new medical technologies into clinical and product implementation.
Dopaminergic imaging is an established biomarker for dementia with Lewy bodies, but its diagnostic accuracy at the mild cognitive impairment (MCI) stage remains uncertain.
To provide robust prospective evidence of the diagnostic accuracy of dopaminergic imaging at the MCI stage to either support or refute its inclusion as a biomarker for the diagnosis of MCI with Lewy bodies.
We conducted a prospective diagnostic accuracy study of baseline dopaminergic imaging with [123I]N-ω-fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane single-photon emission computerised tomography (123I-FP-CIT SPECT) in 144 patients with MCI. Images were rated as normal or abnormal by a panel of experts with access to striatal binding ratio results. Follow-up consensus diagnosis based on the presence of core features of Lewy body disease was used as the reference standard.
At latest assessment (mean follow-up 2 years), 61 patients had probable MCI with Lewy bodies, 26 possible MCI with Lewy bodies and 57 MCI due to Alzheimer's disease. The sensitivity of baseline FP-CIT visual rating for probable MCI with Lewy bodies was 66% (95% CI 52–77%), specificity 88% (76–95%) and accuracy 76% (68–84%), with a positive likelihood ratio of 5.3.
It is over five times as likely for an abnormal scan to be found in probable MCI with Lewy bodies than MCI due to Alzheimer's disease. Dopaminergic imaging appears to be useful at the MCI stage in cases where Lewy body disease is suspected clinically.
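The reported accuracy figures follow from a standard 2×2 confusion table. A brief sketch, with hypothetical counts chosen only to be roughly consistent with the published percentages (not the study's data):

```python
# Standard diagnostic-accuracy metrics from a 2x2 confusion table.
# Counts below are hypothetical, chosen to approximate the reported
# 66% sensitivity, 88% specificity and positive likelihood ratio 5.3.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
    return sensitivity, specificity, accuracy, lr_pos

sens, spec, acc, lr = diagnostic_metrics(tp=40, fn=21, tn=50, fp=7)
```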
Reimbursement agencies are increasingly using patient preference data to evaluate health technologies. Discrete choice experiments (DCE) are commonly used to elicit patient preferences, but they require large sample sizes to obtain meaningful results. For this reason, it is often not possible to use DCE to elicit patient preferences in rare diseases. This study assessed a swing weighting method for eliciting preferences from a small sample: patients with immunoglobulin A nephropathy (IgAN) in the United States (US) and China.
Attributes and levels were selected based on a review of clinical studies and qualitative research on patients. Computer-assisted, interview-based swing weighting exercises were piloted in a focus group with five participants each from the US and China. Preferences were then elicited in interviews with twenty-five patients in the US and fifteen patients in China. Consistency tests were used to assess internal validity. Qualitative data were collected on the reasons for patients’ preferences.
Preference consistency: The weights for one attribute were elicited twice. The difference between initial and consistency test weights was not statistically significant (p < 0.1), although this may partly reflect the small sample sizes. Trade-offs: Qualitative data were used to demonstrate the validity of interpreting participants’ ratings as trade-offs. Using the partial value function for end-stage renal disease as an example, qualitative data demonstrated that patients were able to provide face-valid reasons for different shaped, non-linear preference functions. Robustness of treatment evaluation: Three hypothetical treatment profiles (using the attribute swings) were constructed. Preferences for these treatment profiles were robust to variations in patients’ preferences; all patients preferred one specific profile. This finding was not sensitive to changes in weights.
This study supports the feasibility of collecting valid and robust preference data from small groups of patients using swing weighting. Further work could be done to test the performance of swing weighting in larger sample sizes.
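At its core, swing weighting reduces to a normalization step: each attribute's swing is rated against the most important swing (set to 100), and ratings are rescaled to weights summing to 1. A minimal sketch of that standard step (the study's elicitation procedure is richer than this; attribute names are illustrative):

```python
# Swing-weight normalization sketch (standard MCDA step; not the
# study's exact procedure). Input: raw swing ratings per attribute,
# with the most important swing conventionally rated 100.
def swing_weights(ratings):
    total = sum(ratings.values())
    return {attr: r / total for attr, r in ratings.items()}

# Hypothetical example with three attributes:
w = swing_weights({"efficacy": 100, "risk": 60, "burden": 40})
```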
Background: As more US hospitals report antibiotic utilization to the CDC, standardized antimicrobial administration ratios (SAARs) derived from patient care unit-based antibiotic utilization data will increasingly be used to guide local antibiotic stewardship interventions. Location-based antibiotic utilization surveillance data are often utilized given the relative ease of ascertainment. However, aggregating antibiotic use data on a unit basis may have variable effects depending on the number of clinical teams providing care. In this study, we examined antibiotic utilization from units at a tertiary-care hospital to illustrate the potential challenges of using unit-based antibiotic utilization to change individual prescribing. Methods: We used inpatient pharmacy antibiotic use administration records at an adult tertiary-care academic medical center over a 6-month period from January 2019 through June 2019 to describe the geographic footprints and antibiotic utilization of medical, surgical, and critical care teams. All teams accounting for at least 1 patient day present on each unit during the study period were included in the analysis, as were all teams prescribing at least 1 antibiotic day of therapy (DOT). Results: The study population consisted of 24 units: 6 ICUs (25%) and 18 non-ICUs (75%). Over the study period, the average numbers of teams caring for patients in ICU and non-ICU wards were 10.2 (range, 3.2–16.9) and 13.7 (range, 10.4–18.9), respectively. Units were divided into 3 categories by the number of teams accounting for ≥70% of total patient days present (Fig. 1): “homogeneous” (≤3 teams), “pauciteam” (4–7 teams), and “heterogeneous” (>7 teams). In total, 12 (50%) units were “pauciteam”; 7 (29%) were “homogeneous”; and 5 (21%) were “heterogeneous.” Units could also be classified as “homogeneous,” “pauciteam,” or “heterogeneous” based on team-level antibiotic utilization or DOT for specific antibiotics. Different patterns emerged based on antibiotic restriction status.
Classification based on vancomycin DOT (unrestricted) yielded fewer “heterogeneous” units, whereas classification based on meropenem DOT (restricted) yielded none. Furthermore, the average number of units in which individual clinical teams prescribed an antibiotic varied widely (range, 1.4–12.3 units per team). Conclusions: Unit-based antibiotic utilization data may have limited ability to affect prescriber behavior, particularly on units where a large number of clinical teams contribute to antibiotic utilization. Additionally, some services prescribing antibiotics across many hospital units may be minimally influenced by unit-level data. Team-based antibiotic utilization may provide a more targeted metric to drive individual team prescribing.
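The unit classification described above can be sketched as a small rule: count the fewest teams needed to cover the ≥70% patient-day threshold, then bin the unit. This is an interpretation of the stated thresholds; function and category names follow the abstract, inputs are illustrative:

```python
# Classify a unit by the smallest number of teams needed to account
# for >=70% of its total patient days present, per the categories
# described above. Illustrative interpretation, not the study's code.
def classify_unit(team_patient_days, threshold=0.70):
    total = sum(team_patient_days)
    cumulative, n_teams = 0.0, 0
    for days in sorted(team_patient_days, reverse=True):
        cumulative += days
        n_teams += 1
        if cumulative / total >= threshold:
            break
    if n_teams <= 3:
        return "homogeneous"
    elif n_teams <= 7:
        return "pauciteam"
    return "heterogeneous"
```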
Background: Handshake antibiotic stewardship is an effective but resource-intensive strategy for reducing antimicrobial utilization. At larger hospitals, widespread implementation of direct handshake rounds may be constrained by available resources. To optimize resource utilization and mirror handshake antimicrobial stewardship, we designed an indirect feedback model utilizing existing team pharmacy infrastructure. Methods: The antibiotic stewardship program (ASP) utilized the plan-do-study-act (PDSA) improvement methodology to implement an antibiotic stewardship intervention centered on antimicrobial utilization feedback and patient-level recommendations to optimize antimicrobial utilization. The intervention included team-based antimicrobial utilization dashboard development, biweekly antimicrobial utilization data feedback of total antimicrobial utilization and select drug-specific antimicrobial utilization, and twice weekly individualized review by ASP staff of all patients admitted to the 5 hospitalist teams on antimicrobials with recommendations (discontinuation, optimization, etc) relayed electronically to team-based pharmacists. Pharmacists were to communicate recommendations as an indirect surrogate for handshake antibiotic stewardship. As reviewer duties expanded to include a rotation of multiple reviewers, a standard operating procedure was created. A closed-loop communication model was developed to ensure pharmacist feedback receipt and to allow intervention acceptance tracking. During implementation optimization, a team pharmacist-champion was identified and addressed communication lapses. An outcome measure of days of therapy per 1,000 patient days present (DOT/1,000 PD) and balance measure of in-hospital mortality were chosen. Implementation began April 5, 2019, and data were collected through October 31, 2019. Preintervention comparison data spanned December 2017 to April 2019. 
Results: Overall, 1,119 cases were reviewed by the ASP, of whom 255 (22.8%) received feedback. In total, 236 of 362 recommendations (65.2%) were implemented (Fig. 1). Antimicrobial discontinuation was the most frequent (147 of 362, 40.6%), and most consistently implemented (111 of 147, 75.3%), recommendation. The DOT/1,000 PD before the intervention compared to the same metric after intervention remained unchanged (741.1 vs 725.4; P = .60) as did crude in-hospital mortality (1.8% vs 1.7%; P = .76). Several contributing factors were identified: communication lapses (eg, emails not received by 2 pharmacists), intervention timing (mismatch of recommendation and rounding window), and individual culture (some pharmacists with reduced buy-in selectively relayed recommendations). Conclusion: Although resource efficient, this model of indirect handshake did not significantly impact total antimicrobial utilization. Through serial PDSA cycles, implementation barriers were identified that can be addressed to improve the feedback process. Communication, expectation management, and interpersonal relationship development emerged as critical issues contributing to poor recommendation adherence. Future PDSA cycles will focus on streamlining processes to improve communication among stakeholders.
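The outcome metric reported above is a simple ratio; for clarity, a one-line illustrative helper (not the study's code):

```python
# Days of therapy per 1,000 patient days present (DOT/1,000 PD),
# the antimicrobial-use outcome measure named above.
def dot_per_1000_pd(days_of_therapy, patient_days_present):
    return 1000.0 * days_of_therapy / patient_days_present
```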
Background: The current NHSN guideline states that positive results from both blood cultures and non–culture-based testing (NCT) methodologies are to be used for central-line–associated bloodstream infection (CLABSI) surveillance determination. A positive NCT result in the absence of blood cultures or negative blood cultures in patients who meet CLABSI criteria is to be reported to NHSN. However, the reporting criteria for NCT changed starting January 1, 2020: If NCT is positive and the blood culture is negative 2 days before or 1 day after, the NCT result is not reported. If the NCT is positive with no blood culture within the 3-day window period, the NCT result is reported in patients who meet CLABSI criteria. We estimated the impact of the new NCT criteria on CLABSI numbers and rates compared to the previous definition. Methods: At our facility, the T2Candida Panel (T2), an NCT, was implemented for clinical use for the detection of early candidemia and invasive candidiasis. The T2 is a rapid molecular test performed directly on blood samples to detect DNA of 5 Candida spp: C. albicans/C. tropicalis, C. glabrata/C. krusei, and C. parapsilosis. In this retrospective study performed at an 877-bed teaching hospital in Detroit, we reviewed the impact of discordant T2 results (positive T2 with negative blood cultures) on CLABSI rates from January 1, 2017, to September 30, 2019, based on the current definition, and we applied the revised criteria to estimate the new CLABSI numbers and rates for the same period. Results: Of 343 positive T2 results, 202 (58.9%) were discordant and qualified for CLABSI determination during the study period. Of these, 109 (54%) met CLABSI criteria based on the current definition and 11 (5%) met CLABSI criteria using the new definition (proportional P < .001), resulting in an 89.9% reduction. The CLABSI rate per 1,000 central-line days, which includes discordant T2 results, based on the current and new NCT criteria, are listed in Table 1. 
Conclusions: In institutions that utilize NCT such as T2, application of the new 2020 NCT NHSN definition would significantly reduce the CLABSI number and have a significant impact on the CLABSI rates and standardized infection ratios (SIRs).
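The revised 2020 reporting rule can be sketched as a date-window check. This is one interpretation of the criteria as stated above, not official NHSN logic, and it covers only the blood-culture window step (the patient must separately meet CLABSI criteria):

```python
from datetime import date, timedelta

# Sketch of the revised NCT reporting rule described above: a positive
# non-culture-based test (NCT) is NOT reported if a negative blood
# culture falls within the window from 2 days before to 1 day after
# the NCT date; with no blood culture in that window, it is reported.
def report_nct(nct_date, negative_culture_dates):
    window_start = nct_date - timedelta(days=2)
    window_end = nct_date + timedelta(days=1)
    in_window = any(window_start <= d <= window_end
                    for d in negative_culture_dates)
    return not in_window  # report only when no negative culture in window
```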
The interpersonal theory of suicide (IPTS) is one of the most intensively researched contemporary theories on the development of suicidal ideation and behaviour. However, there is a lack of carefully conducted prospective studies.
To evaluate the main predictions of the IPTS regarding the importance of perceived burdensomeness, thwarted belongingness and capability for suicide in predicting future suicide attempts in a prospective design.
Psychiatric in-patients (n = 308; 53.6% (n = 165) female; mean age 36.82 years, s.d. = 14.30, range 18–81) admitted for severe suicidal ideation (n = 145, 47.1%) or a suicide attempt completed self-report measures of thwarted belongingness, perceived burdensomeness, capability for suicide, hopelessness, depression and suicidal ideation as well as interviews on suicide intent and suicide attempts and were followed up for 12 months. Logistic regression and receiver operating characteristics (ROC) analysis were conducted.
The interaction of perceived burdensomeness, thwarted belongingness and capability for suicide was not predictive of future suicide attempts, but perceived burdensomeness showed a significant main effect (z = 3.49, P < 0.01; OR = 2.34, 95% CI 1.59–3.58) and moderate performance in screening for future suicide attempts (area under the curve AUC = 0.729, P < 0.01).
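The reported odds ratio relates to the logistic regression coefficient by exponentiation. A generic sketch using a Wald-type interval (the study's exact CI method is not stated; inputs below are illustrative, not the study's data):

```python
import math

# Convert a logistic-regression coefficient (beta) and its standard
# error into an odds ratio with a Wald-type 95% CI. Illustrative
# helper; not the study's code or CI method.
def odds_ratio_ci(beta, se, z_crit=1.96):
    or_point = math.exp(beta)
    lower = math.exp(beta - z_crit * se)
    upper = math.exp(beta + z_crit * se)
    return or_point, lower, upper

# Hypothetical example: beta chosen so the point estimate is OR = 2.0.
or_point, lo, hi = odds_ratio_ci(beta=math.log(2.0), se=0.2)
```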
The results challenge the theoretical validity of the IPTS and its clinical utility – at least within the methodological limitations of the current study. Yet, findings underscore the importance of perceived burdensomeness in understanding suicidal ideation and behaviour.
We analysed thirty-five 400-m² plots encompassing forest, savanna and intermediate vegetation types in an ecotonal area in Ghana, West Africa. Across all plots, fire frequency was relatively uniform over a period of 15 years (once every 2–4 years). Although woodlands were dominated by species typically associated with savanna-type formations, and forest formations by species usually associated with closed canopies, these associations were non-obligatory, and a discrete non-specialized species grouping was also identified. Across all plots, crown area index, stem basal area and above-ground biomass were positively associated with higher soil exchangeable potassium and silt contents, supporting recent suggestions of interplays between potassium and soil water storage potential as a significant influence on tropical vegetation structure. We also found an average NDVI cover increase of ~0.15% year⁻¹ (1984–2011), with plots dominated by non-specialized species increasing more than those dominated by either forest- or savanna-affiliated species. Our results challenge the traditional view of a simple forest vs. savanna dichotomy controlled by fire, with our newly identified third, non-specialized species grouping also potentially important in understanding ecotonal responses to climate change.
Crocodilians are distributed widely through the tropics and subtropics, and several species pose a substantial threat to human life. This has important implications for human safety and crocodilian conservation. Understanding the drivers of crocodilian attacks on people could help minimize future attacks and inform conflict management. Crocodilian attacks follow a seasonal pattern in many regions, but there has been limited analysis of the relationship between attack occurrence and fine-scale contemporaneous environmental conditions. We use methods from environmental niche modelling to explore the relationships between attacks on people and abiotic predictors at a daily temporal resolution for the Nile crocodile Crocodylus niloticus in South Africa and Eswatini (formerly Swaziland), and the American alligator Alligator mississippiensis in Florida, USA. Our results indicate that ambient daily temperature is the most important abiotic temporal predictor of attack occurrence for both species, with attack likelihood increasing markedly when mean daily temperatures exceed 18 °C and peaking at 28 °C. It is likely that this relationship is explained partially by human propensity to spend time in and around water in warmer weather but also by the effect of temperature on crocodilian hunting behaviour and physiology, especially the ability to digest food. We discuss the potential of our findings to contribute to the management of crocodilians, with benefits for both human safety and conservation, and the application of environmental niche modelling for understanding human–wildlife conflicts involving both ectotherms and endotherms.
This paper presents data obtained in a one-day census investigation in five European countries (Austria, Hungary, Romania, Slovakia, Slovenia). The census forms were filled in for 4191 psychiatric inpatients. Concerning legal status, 11.2% were hospitalised against their will (committed) and 21.4% were treated in a ward with locked doors. There was only a small correlation between commitment and treatment in a locked ward: treatment of committed patients in open wards (Austria, Hungary) and of voluntary patients in closed wards (Slovakia, Slovenia) was more frequent than treatment of committed patients in locked wards. Concerning employment, 27.7% of patients aged 18–60 held a job before admission. The vast majority of patients (84.8%) had a length of stay of less than 3 months. A comparison of these data with the results of a study performed in 1996 using the same method shows a decrease in the rates of long-stay patients. In 1996 the rates of employment were significantly higher in Romania (39.3%) and Slovakia (42.5%) than in Austria (30.7%); these differences disappeared in 1999 owing to decreasing rates of employment in Romania and Slovakia. The numbers of mental health personnel vary between types of institution (university or non-university) and countries, being highest in Austria and lowest in Romania. A considerable increase in the numbers of staff was found in Slovakia.
Anabolic androgenic steroids (AAS) are derived by chemical manipulation of the testosterone molecule. This category of drugs produces anabolic, androgenic and psychoactive effects, including elevated aggressive, hostile, violent and antisocial behaviour.
The objective of this observational case study was to evaluate the possible psychological consequences of AAS use in the user twin of each pair, compared with the non-user twin.
We studied two pairs of male monozygotic twins: one pair 24 years old and the other 31 years old, with absolute genome and phenotype similarity. One of the twins of each pair used AAS while the other did not. Both pairs lived in Hellenic provincial towns and followed a common training and nutrition regime. The psychometric instruments used were the Symptoms Check List-90 (SCL-90) and the Hostility and Direction of Hostility Questionnaire (HDHQ). The psychometric evaluations took place within a time interval of 6 months.
The study found high levels of aggressiveness, hostility, anxiety and paranoid ideation in the twins who used AAS. The non-user twins showed no deviation from their initial status.
The use of AAS induced several important psychiatric changes in the user twins that were not present in the twins who did not use AAS.
Cognitive dysfunction is increasingly considered to be the strongest clinical predictor of poor long-term outcome in schizophrenia. Associations have been found between the severity of cognitive deficits and social dysfunction, impairments in independent living, occupational limitations, and disturbances in quality of life (QOL).
In this cross-sectional study, the relationships of cognitive deficits and treatment outcomes in terms of QOL, needs, and psychosocial functioning were examined in 60 outpatients with schizophrenia who had a duration of illness over 2 years and had been treated with either clozapine or olanzapine for at least 6 months.
The present study suggests that cognitive functioning might be a predictor of work functioning/independent living outcome in stabilized patients with schizophrenia: deficits of visual memory and working memory were negatively associated with occupational functioning, and older patients lived independently and/or in a stable partnership more often. The patients' assessments of QOL and needs for care did not show any significant associations with cognitive functioning.
These findings suggest that cognitive functioning is a key determinant of work functioning/independent living for stable outpatients with schizophrenia.