Negative attitudes towards the treatment and hospitalization of patients with borderline personality disorder (BPD) exist among mental health clinicians. These attitudes could affect the treatment administered to these patients, the length of hospitalization and its cost.
To establish recommendations for health officials regarding the hospitalization and treatment of patients with BPD, in order to shorten their hospitalizations and improve the quality of their treatment.
A thorough examination of attitudes towards patients with BPD among four professions (psychiatrists, psychologists, social workers and nurses) in four hospitals, combined with interviews of the hospital directors and several ward directors, to evaluate their policies on the admission and treatment of patients with BPD.
We administered questionnaires on explicit and implicit attitudes towards these patients to 710 clinicians in four hospitals in Israel, and interviewed the hospitals’ directors and several ward directors. We collected data on the hospitalizations of patients with BPD during the period 2009-2011 and analyzed differences in these measures between professions and hospitals.
Nurses and psychiatrists had the most negative attitudes towards these patients, and differences were noted between hospitals. Hospital D was characterized by more negative attitudes, whereas hospital A showed less negative attitudes and, correspondingly, longer admissions and higher hospitalization costs.
Nurses and psychiatrists are likely to express negative attitudes towards these patients. The directors’ attitudes and policies may have influenced the length and cost of hospitalization of patients with BPD.
Current guidelines recommend highly specialized care for patients with severe personality disorders (PDs). However, there is little knowledge about how to detect older patients with severe PDs. The aim of the current study was to develop an age-specific tool to detect older adults with severe PDs for highly specialized mental health care.
In a Delphi study, a tool to detect adults with severe PDs for highly specialized mental health care was adjusted for older adults based on expert opinion. Subsequently, the psychometric properties of the age-specific tool were evaluated.
The psychometric part of the study was performed in two Dutch highly specialized centers for PDs in older adults.
Patients (N = 90) from the two highly specialized centers for PDs in older adults were enrolled.
The age-specific tool was evaluated using clinical judgment as the gold standard.
The Delphi study resulted in an age-specific tool, consisting of seven items, to detect older adults with severe PDs for highly specialized mental health care. Psychometric properties of this tool were evaluated. Receiver operating characteristic curve analysis showed that the questionnaire had sufficient diagnostic accuracy. Internal consistency of the tool was sufficient and inter-rater reliability was moderate.
An age-specific tool to detect older adults with severe PDs was developed based on expert opinion. Psychometric properties were evaluated showing sufficient diagnostic accuracy. The tool may preliminarily be used in mental health care to detect older adults with severe PDs to refer them to highly specialized care in an early phase.
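Diagnostic accuracy of a screening tool such as this one is commonly summarized by the area under the ROC curve, which equals the probability that a randomly chosen case scores higher than a randomly chosen non-case. A minimal sketch of that pairwise interpretation (scores are illustrative, not study data):

```python
# AUC as the probability that a random positive outranks a random negative
# (ties count as 0.5). All scores below are illustrative, not study data.
def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical tool scores for patients judged severe vs. not severe
print(auc([0.9, 0.8, 0.4], [0.3, 0.5, 0.2]))  # 8/9 ≈ 0.889
```

This brute-force pairwise count is equivalent to the trapezoidal area under the empirical ROC curve and is adequate for small samples like the one described.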
Introduction: 9-1-1 telecommunicators receive minimal education on agonal breathing, often resulting in unrecognized out-of-hospital cardiac arrest (OHCA). We successfully piloted an educational intervention that significantly improved telecommunicators’ OHCA recognition and bystander CPR rates in Ottawa. We sought to better understand the operations of Canadian 9-1-1 communications centers (CCs) in preparation for a multi-centre study of this intervention. Methods: We conducted a national survey of all Canadian CCs. Survey domains included organizational structure, dispatch system used, education curriculum, and performance monitoring. The survey was peer-reviewed, translated into French, pilot-tested, and distributed electronically using a modified Dillman method. We designated respondents in each CC before distribution and used targeted follow-up and small incentives to increase the response rate. Respondents also described the functioning of neighboring CCs if known. Results: We received information from 51/51 provincial and 1/25 territorial CCs, representing 99.7% of the Canadian population. CCs largely utilize the Medical Priority Dispatch System (MPDS) platform (93%), many are province/ministry regulated (50%) and most require a high school diploma as the minimum entry-level education (78%). Telecommunicators receive initial in-class training (median 1.3 months, IQR 0.3-1.9; range 0.1-2.2), often followed by a preceptorship (84.4%; median 1.0 months, IQR 0.7-1.7; range 0.4-6.0). The educational curriculum includes information on agonal breathing in 41% of CCs, without audio examples in 34%. Among responding CCs, over 39,000 suspected OHCA 9-1-1 calls are received annually. Few CCs maintain local performance statistics on OHCA recognition (25%), bystander CPR rates (25%) or survival rates (50%). Most (97%) expressed interest in future research collaborations. Conclusion: Most Canadian telecommunicators receive no or minimal education in recognizing agonal breathing.
Further training and improved OHCA monitoring may assist recognition and enhance outcomes.
The Brazilian Household Food Insecurity Measurement Scale (EBIA) has eight general/adult items applied in all households and six additional items asked exclusively in households with children and/or adolescents (HHCA). Continuing an investigation programme on the adequacy of model-based cut-off points for the EBIA, the present study aims to: (i) explore the capacity to properly stratify HHCA according to food insecurity (FI) severity level by applying only the eight ‘generic’ items; and (ii) compare this against the fourteen-item scale.
Latent class factor analysis (LCFA) models were applied to the answers to the eight general/adult items to identify latent groups corresponding to FI levels and optimal group-separating cut-off points. Analyses involved a thorough classification agreement evaluation and were performed at the national level and by macro-regions.
Data derived from the cross-sectional Brazilian National Household Sample Survey of 2013.
A nationally representative sample of 116 543 households.
In all households and investigated domains, LCFA detected four distinct household food (in)security groups (food security and three levels of severity of FI) and the same set of cut-off points (1/2, 4/5 and 6/7). Misclassification in the aggregate data was 0·66 % in adult-only households and 1·06 % in HHCA. Comparison of the scale reduced to eight items with the ‘original’ fourteen-item scale demonstrated consistency in the classification. In HHCA, the agreement between both classifications was 96·2 %.
Results indicate the eight ‘generic’ items in HHCA can be reliably used when it is not possible to apply the fourteen-item scale.
Introduction: Extracorporeal cardiopulmonary resuscitation (E-CPR) has been used successfully to increase survival in patients suffering from out-of-hospital cardiac arrest (OHCA). However, few OHCA patients can benefit from E-CPR since this procedure is only performed in dedicated centers. Prehospital triage systems have helped decrease mortality from other acute conditions by directly transporting patients to dedicated centers, often bypassing primary care centers. Our study aimed to quantify the possible impact of a prehospital triage system on the proportion of E-CPR eligible patients transported to E-CPR centers. Methods: We used a registry of adult OHCA collected between 2010 and 2015 from the city of Montréal, Canada. Included patients were adults with non-traumatic witnessed OHCA refractory to 15 minutes of resuscitation. Using this cohort, we created 3 scenarios in which potential E-CPR candidates could be redirected to E-CPR centers. We used strict eligibility criteria in our first pair (e.g. age <60 years old, initial shockable rhythm), intermediate criteria in our second pair (e.g. age <65 years old, at least one shock given) and inclusive criteria in our third pair (e.g. age <70 years old, initial rhythm ≠ asystole). These 3 scenarios were compared to their counterparts in which patients would be transported to the closest hospital. The proportions of patients who would have been transported to an E-CPR center were compared using McNemar’s test. To obtain a power of 99%, expecting 1% of discordant pairs and using a unilateral alpha of 0.83% (after Bonferroni correction), we needed to include at least 1000 patients.  Results: A total of 3136 patients (2054 men and 982 women) with a mean age of 69 years (standard deviation 15) were included.
In each simulation, prehospital redirection would have significantly increased the proportion of patients transported to an E-CPR center (pair 1: 1.3% vs 3.8%, p<0.001; pair 2: 2.6% vs 7.3%, p<0.001; pair 3: 7.6% vs 29.8%, p<0.001). Conclusion: In an urban setting, a prehospital triage system could triple the number of patients with refractory OHCA who would have access to E-CPR. This implies that centers with E-CPR capability should prepare themselves accordingly for such a system to effectively improve survival following OHCA.
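The paired comparisons above use McNemar’s test, which considers only the discordant pairs (patients classified differently under the two transport strategies). A minimal sketch of the chi-squared form of the test (the counts are illustrative, not the study’s data):

```python
import math

def mcnemar_chi2(b, c):
    """McNemar's chi-squared test for paired binary outcomes.
    b, c: counts of the two kinds of discordant pairs."""
    stat = (b - c) ** 2 / (b + c)
    # With 1 degree of freedom, the chi-squared survival function
    # reduces to the complementary error function:
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative counts: 25 patients redirected only under the triage
# scenario vs 5 only under closest-hospital transport.
stat, p = mcnemar_chi2(25, 5)
print(round(stat, 2), p < 0.001)  # 13.33 True
```

An exact binomial version (appropriate when discordant pairs are few, as in the power calculation above) would instead treat b as Binomial(b + c, 0.5).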
A recent outbreak of Q fever was linked to an intensive goat and sheep dairy farm in Victoria, Australia, 2012-2014. Seventeen employees and one family member were confirmed with Q fever over a 28-month period, including two culture-positive cases. The outbreak investigation and management involved a One Health approach with representation from human, animal, environmental and public health. Seroprevalence in non-pregnant milking goats was 15% [95% confidence interval (CI) 7–27]; active infection was confirmed by positive quantitative PCR on several animal specimens. Genotyping of Coxiella burnetii DNA obtained from goat and human specimens was identical by two typing methods. A number of farming practices probably contributed to the outbreak, with similar precipitating factors to the Netherlands outbreak, 2007-2012. Compared to workers in a high-efficiency particulate arrestance (HEPA) filtered factory, administrative staff in an unfiltered adjoining office and those regularly handling goats and kids had 5·49 (95% CI 1·29–23·4) and 5·65 (95% CI 1·09–29·3) times the risk of infection, respectively, suggesting that factory workers were protected from windborne spread of organisms. Reduction in the incidence of human cases was achieved through an intensive human vaccination programme plus environmental and biosecurity interventions. Subsequent non-occupational acquisition of Q fever in the spouse of an employee indicates that infection remains endemic in the goat herd and remains a challenge to manage without source control.
To identify the prognostic significance of specific lymph node related characteristics for disease persistence and recurrence in patients with pre- or intra-operative evidence of neck metastases and no other risk factors.
Method and results
Sixty-eight patients were identified; 50 per cent had persistent or recurrent disease. All underwent the same treatment strategy. There were no statistically significant differences in any of the patient- or tumour-related parameters when patients with and without persistence or recurrence were compared. Patients with recurrent or persistent disease had significantly larger (>3 cm) metastatic lymph nodes, but there were no differences regarding other lymph node related parameters (i.e. number, extracapsular extension, number of lymph nodes with extracapsular extension, and central vs lateral neck location). On multivariate analysis, however, none of the parameters were predictive of persistent or recurrent disease.
In papillary thyroid carcinoma patients with no other risk factors, pre- or intra-operative evidence of cervical metastases was associated with a very high rate of disease persistence or recurrence. Specific lymph node characteristics were not shown to have prognostic significance.
The Geriatric Anxiety Scale (GAS; Segal, D. L., June, A., Payne, M., Coolidge, F. L. and Yochim, B. (2010). Journal of Anxiety Disorders, 24, 709–714. doi:10.1016/j.janxdis.2010.05.002) is a self-report measure of anxiety that was designed to address unique issues associated with anxiety assessment in older adults. This study is the first to use item response theory (IRT) to examine the psychometric properties of a measure of anxiety in older adults.
A large sample of older adults (n = 581; mean age = 72.32 years, SD = 7.64 years, range = 60 to 96 years; 64% women; 88% European American) completed the GAS. IRT properties were examined. The presence of differential item functioning (DIF) or measurement bias by age and sex was assessed, and a ten-item short form of the GAS (called the GAS-10) was created.
All GAS items had discrimination parameters of 1.07 or greater. Items from the somatic subscale tended to have lower discrimination parameters than items on the cognitive or affective subscales. Two items were flagged for DIF, but the impact of the DIF was negligible. Women scored significantly higher than men on the GAS and its subscales. Participants in the young-old group (60 to 79 years old) scored significantly higher on the cognitive subscale than participants in the old-old group (80 years old and older).
Results from the IRT analyses indicated that the GAS and GAS-10 have strong psychometric properties among older adults. We conclude by discussing implications and future research directions.
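The discrimination parameters reported above come from an IRT model; in the widely used two-parameter logistic (2PL) model, the probability of endorsing an item is P(θ) = 1 / (1 + exp(−a(θ − b))), where a is discrimination and b is item location. A minimal sketch (parameter values are illustrative, not the study’s estimates):

```python
import math

def p_endorse(theta, a, b):
    """2PL item response function: probability of endorsing an item,
    given latent trait level theta, discrimination a and location b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the probability is exactly 0.5 regardless of a;
# a larger discrimination makes the curve steeper around b, so the
# item separates nearby trait levels more sharply.
print(p_endorse(0.0, 1.07, 0.0))                               # 0.5
print(p_endorse(1.0, 1.07, 0.0) > p_endorse(1.0, 0.5, 0.0))    # True
```

This steepness is why the somatic items’ lower discrimination values translate into weaker differentiation between adjacent anxiety levels than the cognitive or affective items provide.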
Although post-weaning mortality (PWM) in pig farming is mainly due to the effect of pathogens, farm type or swine management are also directly or indirectly involved. In this work, we used null models and the partial least squares approach (PLS) to structural equation modelling, also known as PLS path modelling (PLS-PM), to explore whether farm type, swine management and pathogens, including porcine circovirus type 2, swine influenza virus, porcine reproductive and respiratory syndrome virus and Aujeszky's disease virus, directly or indirectly influenced PWM in 42 Spanish indoor pig farms. The null model analysis revealed that contact with multiple combinations of viruses could occur by chance. On the other hand, PLS-PM showed that farm characteristics do not influence virus infections, and thus neither farm type nor associated management practices shaped PWM due to pathogens. Accordingly, preventive programmes aimed at controlling PWM in intensive farming should prioritize the control of major pig pathogens.
The official introduction of the psychiatric diagnosis of personality disorders (PDs) in the Diagnostic and Statistical Manual of Mental Disorders (DSM) began in 1952 with the publication of the first edition (American Psychiatric Association, 1952). DSM-I contained 12 main types of PDs, with the total description for all types confined to only two paragraphs. In the following DSM-II (American Psychiatric Association, 1968), just 10 specific types of PDs were described, together with a very brief general definition of PDs. The DSM-III (American Psychiatric Association, 1980) marked a significant paradigm shift from the medical model by incorporating a multi-axial approach, in which symptoms across five primary axes were used to describe the pathological state and formulate the diagnosis. Notably, the PDs were placed on a separate axis (Axis II) to distinguish their long-standing nature from the more episodic clinical disorders placed on Axis I. PDs were recognized as important formal diagnoses and received a more comprehensive listing of polythetic diagnostic criteria for each specific PD.
Improving health through better nutrition of the population may contribute to enhanced efficiency and sustainability of healthcare systems. A recent expert meeting investigated in detail a number of methodological aspects related to the discipline of nutrition economics. The role of nutrition in health maintenance and in the prevention of non-communicable diseases is now generally recognised. However, the main scope of those seeking to contain healthcare expenditures tends to focus on the management of existing chronic diseases. Identifying additional relevant dimensions to measure and the context of use will become increasingly important in selecting and developing outcome measurements for nutrition interventions. The translation of nutrition-related research data into public health guidance raises the challenging issue of carrying out more pragmatic trials in many areas where these would generate the most useful evidence for health policy decision-making. Nutrition exemplifies all the types of interventions and policy which need evaluating across the health field. There is a need to start actively engaging key stakeholders in order to collect data and to widen health technology assessment approaches for achieving a policy shift from evidence-based medicine to evidence-based decision-making in the field of nutrition.
To study the molecular epidemiology of vancomycin-resistant Enterococcus (VRE) colonization and to identify modifiable risk factors among patients with hematologic malignancies.
A hematology-oncology unit with high prevalence of VRE colonization.
Patients with hematologic malignancies and hematopoietic stem cell transplantation recipients admitted to the hospital.
Patients underwent weekly surveillance by means of perianal swabs for VRE colonization and, if colonized, were placed in contact isolation. We studied the molecular epidemiology in fecal and blood isolates by pulsed-field gel electrophoresis over a 1-year period. We performed a retrospective case-control study over a 3-year period. Cases were defined as patients colonized by VRE, and controls were defined as patients negative for VRE colonization. Case patients and control patients were matched by admitting service and length of observation time.
Molecular genotyping demonstrated the primarily polyclonal nature of VRE isolates. Colonization occurred at a median of 14 days. Colonized patients were characterized by longer hospital admissions. Previous use of ceftazidime was associated with VRE colonization (P < .001), while use of intravenous vancomycin and antibiotics with anaerobic activity did not emerge as a risk factor. There was no association with neutropenia or presence of colonic mucosal disruption, and severity of illness was similar in both groups.
Molecular studies showed that in the majority of VRE-colonized patients the strains were unique, arguing that VRE acquisition was sporadic rather than resulting from a common source of transmission. Patient-specific factors, including prior antibiotic exposure, rather than breaches in infection control likely predict for risk of fecal VRE colonization.
The Center for Research on Interface Structures and Phenomena (CRISP) is a National Science Foundation (NSF) Materials Research Science and Engineering Center (MRSEC). CRISP is a partnership between Yale University, Southern Connecticut State University (SCSU) and Brookhaven National Laboratory. A main focus of CRISP research is complex oxide interfaces that are prepared using epitaxial techniques, including molecular beam epitaxy (MBE). Complex oxides exhibit a wealth of electronic, magnetic and chemical behaviors, and the surfaces and interfaces of complex oxides can have properties that differ substantially from those of the corresponding bulk materials. CRISP employs this research program in a concerted way to educate students at all levels. CRISP has constructed a robust MBE apparatus specifically designed for safe and productive use by undergraduates. Students can grow their own samples and then characterize them with facilities at both Yale and SCSU, providing a complete research and educational experience. This paper will focus on the implementation of the CRISP Teaching MBE facility and its use in the study of the synthesis and properties of the crystalline oxide-silicon interface.
Background and objective: The arterial thermodilution technique offers the ability to measure cardiac output using only central venous and arterial catheters. However, the technique has been reported to overestimate cardiac output because of a greater loss of cold indicator due to the increased distance between the injection and measurement sites. In this study, the two techniques were compared under conditions of low cardiac output, in which a longer passage time may further increase loss of indicator.
Methods: Seventeen anaesthetized dogs were studied during hypovolaemic shock and fluid resuscitation. Cardiac output measurements were carried out simultaneously by arterial and pulmonary artery thermodilution techniques.
Results: One-hundred-and-two measurements were performed. The mean cardiac output was 2.28 ± 1.4 L min−1 by the pulmonary arterial technique and 2.29 ± 1.56 L min−1 by the arterial thermodilution technique. The correlation coefficient between the two measurements was 0.95, the bias (mean difference) −0.04 ± 0.41 L min−1 and the limits of agreement from −0.86 to 0.78 L min−1. Agreement was also consistent at low cardiac outputs.
Conclusions: The arterial thermodilution technique may serve as a less invasive cardiac output monitor in conditions of severe bleeding and shock.
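Method-comparison results like those above are conventionally reported as Bland-Altman statistics: the bias (mean of the paired differences) and 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch (the paired readings are illustrative, not the study’s data):

```python
import statistics

def bland_altman(method_a, method_b):
    """Return bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired cardiac output readings (L/min)
arterial  = [2.1, 2.4, 3.2, 1.4]
pulmonary = [2.0, 2.5, 3.0, 1.5]
bias, (lo, hi) = bland_altman(arterial, pulmonary)
print(round(bias, 3))  # 0.025
```

Note that, unlike the correlation coefficient also quoted in the results, the limits of agreement quantify how far an individual arterial reading may plausibly differ from the pulmonary artery reference.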
“One of the more unfortunate things about the Bakke case is that it became the vehicle for educating, or should I say miseducating, the public about affirmative action. The public learned about affirmative action almost, literally, for the first time through … ten-second sound bites on television, with people polarized against one another.”
Eleanor Holmes Norton, while a member of the Equal Employment Opportunity Commission (in Blackside, 1989)
“[F]ollowing Webster, some network reporters suggested a lockup, giving each reporter five minutes … to study the decision … so they could all report the decision more responsibly.”
Tim O'Brien, ABC News
On June 28, 1978, the Supreme Court issued its much anticipated ruling in the case of Regents of the University of California v. Bakke. Allan Bakke, a white male, claimed that he was discriminated against by the medical school at the University of California at Davis (UC-Davis) because of his race. The celebrated case marked the Court's first full-scale effort to address the legality of publicly promulgated affirmative action programs, in this instance in the context of admissions processes at a professional school. More than a decade later, on July 3, 1989, the Court issued its decision in the similarly anticipated case of Webster v. Reproductive Health Services, dealing with the constitutionality of several provisions of a Missouri law that regulated and restricted a woman's right to obtain an abortion. This time the Court was not working on a clean slate, however, since it had revisited the issue of abortion rights many times in the wake of the landmark Roe v. Wade ruling in 1973 overturning a Texas antiabortion statute.
“They're more driven to stories that will produce ratings, and, therefore, they may be evaluating stories not on the basis of their importance, but how they'll play – whether it meets sort of a bar-stool test, whether people will fall off their bar-stools when they see the story coming on television.”
Carl Stern, former NBC news correspondent
The data we have presented throughout this volume make very clear that the networks' primary interest in the Court is focused on its docket and the decisions that are handed down each term. Further, as chapter 5 has illustrated, the Court's rulings in the terms' leading cases were the primary focal point of network news coverage. It was equally clear, though, that only a small proportion of cases, even of these leading cases, were reported during each of the terms in our analysis. The question remains, then, what influences the choice of which cases to cover? There have been others before us who have examined this question empirically, and their work is discussed briefly below. This research, while noteworthy, has been infrequent and limited in a number of ways. We then turn to our own analysis of the factors related to the coverage by the three networks of the cases that were granted certiorari and eventually decided on their merits with full opinions during the 1989 term. Our effort builds on and attempts to overcome many of the limitations in the previous research and has enabled us to understand more precisely how the choice of which cases to report is made by network news personnel.
“Every time Dan Rather says ‘The Supreme Court today upheld … ’ I want to smack him. … He has got to know better. He's been around too long.”
Toni House, Public Information Officer, U.S. Supreme Court
Throughout our narrative we have documented at many junctures that the Supreme Court is a uniquely invisible institution in the eyes of the American public both in a relative as well as in an absolute sense. As Gregory Caldeira notes, numerous studies have demonstrated that “there is only a shallow reservoir of knowledge about … the Court in the mass public. … Few … fulfill the most minimal prerequisites of the role of a knowledgeable and competent citizen vis-à-vis the Court” (1986: 1211). At any given moment if the average American were queried about any decisions the Court had rendered in its current or past term, the questioner would likely come up largely empty. Considerable research documents “that many Americans little recognize or little remember the Court's rulings. On open-ended questions that probe for specific likes or dislikes about Court rulings, only about half (or fewer) … can offer an opinion on even the most prominent Supreme Court decisions” (Marshall, 1989:143). The lack of public information about the Court extends beyond its decisions, per se, to a similar lack of familiarity with the justices who comprise the Court. Thus, in one study, fewer than 10 percent of the public could name the Chief Justice of the United States while, somewhat ironically, more than a quarter of the populace could recognize the name of Judge Wapner of the People's Court television fame (Morin, 1989).
“Unlike anybody else in town, Supreme Court justices don't covet the press.… The Court doesn't leak, the Court doesn't spin. The Court is there, and you make of it what you do. … It's almost like it's another time, it's another era.”
Pete Williams, NBC News
Students of American politics who study the Supreme Court often take as the starting point for their analyses the notion that the Court is a unique and fundamentally “different” kind of institution in our tripartite governmental system, featuring a merging of law and politics that distinguishes it from the American legislative and executive branches as well as from most other national judiciaries. Similarly, many journalists who cover the institution attest to the unique nature of the Supreme Court beat when contrasting it with reporting on the other branches of government. Indeed, ABC News legal correspondent Tim O'Brien suggested that the Court is not, in the final analysis, a “Washington” beat at all: “I don't even think of covering the Supreme Court really as a Washington assignment.… [O]ften, unlike any other beat in town, I will not have any shot in my piece from Washington.… [I]t's a national beat, but not a Washington beat.”
In the following two chapters we will examine the environment in which Supreme Court journalists operate and the world that they inhabit. First, in chapter 2, we will explore the distinctive nature of the Court beat, with particular attention paid to the numerous manifestations of that distinctiveness.