Geographic range size and abundance are important determinants of extinction risk in fossil and extant taxa. However, the relationship between these variables and extinction risk has not been tested extensively during evolutionarily “quiescent” times of low extinction and speciation in the fossil record. Here we examine the influence of geographic range size and abundance on extinction risk during the late Paleozoic (Mississippian–Permian), a time of “sluggish” evolution when global rates of origination and extinction were roughly half those of other Paleozoic intervals. Analyses used spatiotemporal occurrences for 164 brachiopod species from the North American midcontinent. We found abundance to be a better predictor of extinction risk than measures of geographic range size. Moreover, species exhibited reductions in abundance before their extinction but did not display contractions in geographic range size. The weak relationship between geographic range size and extinction in this time and place may reflect the relative preponderance of larger-ranged taxa combined with the physiographic conditions of the region that allowed for easy habitat tracking that dampened both extinction and speciation. These conditions led to a prolonged period (19–25 Myr) during which standard macroevolutionary rules did not apply.
An intermediate-depth (1751 m) ice core was drilled at the South Pole between 2014 and 2016 using the newly designed US Intermediate Depth Drill. The South Pole ice core is the highest-resolution interior East Antarctic ice core record that extends into the glacial period. The methods used at the South Pole to handle and log the drilled ice, the procedures used to safely retrograde the ice back to the National Science Foundation Ice Core Facility (NSF-ICF), and the methods used to process and sample the ice at the NSF-ICF are described. The South Pole ice core exhibited minimal brittle ice, which was likely due to site characteristics and, to a lesser extent, to drill technology and core handling procedures.
Oral contraceptive use has previously been associated with an increased risk of suicidal behavior in some, but not all, samples. The use of large, representative, longitudinally assessed samples may clarify the nature of this potential association.
Methods
We used Swedish national registries to identify women born between 1991 and 1995 (N = 216 702) and determine whether they retrieved prescriptions for oral contraceptives. We used Cox proportional hazards models to test the association between contraceptive use and first observed suicidal event (suicide attempt or death) from age 15 until the end of follow-up in 2014 (maximum age 22.4). We adjusted for covariates, including mental illness and parental history of suicide.
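As a rough illustration of the survival analysis described above, the sketch below fits a Cox proportional hazards model with the Python lifelines package. The file name and column names (time_to_event, suicidal_event, oc_use, parental_suicide, psych_dx) are hypothetical placeholders, not the registry variables used in the study.
```python
# Minimal sketch of a Cox proportional hazards model for time to first suicidal
# event, using the lifelines package. All file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per woman, follow-up from age 15

cph = CoxPHFitter()
cph.fit(
    df[["time_to_event", "suicidal_event", "oc_use", "parental_suicide", "psych_dx"]],
    duration_col="time_to_event",  # years of follow-up until event or censoring
    event_col="suicidal_event",    # 1 = suicide attempt or death, 0 = censored
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```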
Results
In a crude model, use of combination or progestin-only oral contraceptives was positively associated with suicidal behavior, with hazard ratios (HRs) of 1.73–2.78 after 1 month of use and 1.25–1.82 after 1 year of use. Accounting for sociodemographic, parental, and psychiatric variables attenuated these associations, and risks declined with increasing duration of use: adjusted HRs ranged from 1.56 to 2.13 one month after initiation of use, and from 1.19 to 1.48 one year after initiation. HRs were higher among women who ceased use during the observation period.
Conclusions
Young women using oral contraceptives may be at increased risk of suicidal behavior, but risk declines with increased duration of use. Analysis of former users suggests that women susceptible to depression/anxiety are more likely to cease hormonal contraceptive use. Additional studies are necessary to determine whether the observed association is attributable to a causal mechanism.
Alcohol use disorder (AUD) is common and associated with increased risk of suicide.
Aims
To examine healthcare utilisation prior to suicide in persons with AUD in a large population-based cohort, which may reveal opportunities for prevention.
Method
A national cohort study was conducted of 6 947 191 adults in Sweden in 2002, including 256 647 (3.7%) with AUD, with follow-up for suicide through 2015. A nested case–control design examined healthcare utilisation among people with AUD who died by suicide and 10:1 age- and gender-matched controls.
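A minimal sketch of the 10:1 age- and gender-matched control sampling described above is given below, using pandas. The file and column names (person_id, birth_year, gender, case) are illustrative assumptions; a full risk-set sampling would also restrict controls to people still alive and suicide-free at each case's index date.
```python
# Minimal sketch of drawing 10 age- and gender-matched controls per suicide case.
# Column names are hypothetical; real risk-set sampling would also require
# controls to be alive and event-free at the case's index date.
import pandas as pd

cohort = pd.read_csv("aud_cohort.csv")
cases = cohort[cohort["case"] == 1]
pool = cohort[cohort["case"] == 0]

matched = []
for _, case in cases.iterrows():
    eligible = pool[(pool["birth_year"] == case["birth_year"]) &
                    (pool["gender"] == case["gender"])]
    sampled = eligible.sample(n=min(10, len(eligible)), random_state=0)
    matched.append(sampled.assign(matched_case=case["person_id"]))

controls = pd.concat(matched, ignore_index=True)
```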
Results
In 86.7 million person-years of follow-up, 15 662 (0.2%) persons died by suicide, including 2601 (1.0%) with AUD. Unadjusted and adjusted relative risks for suicide associated with AUD were 8.15 (95% CI 7.86–8.46) and 2.22 (95% CI 2.11–2.34). Of the people with AUD who died by suicide, 39.7% and 75.6% had a healthcare encounter <2 weeks or <3 months before the index date respectively, compared with 6.3% and 25.4% of controls (adjusted prevalence ratio (PR) and difference (PD), <2 weeks: PR = 3.86, 95% CI 3.50–4.25, PD = 26.4, 95% CI 24.2–28.6; <3 months: PR = 2.03, 95% CI 1.94–2.12, PD = 34.9, 95% CI 32.6–37.1). AUD accounted for more healthcare encounters within 2 weeks of suicide among men than women (P = 0.01). Of last encounters, 48.1% were in primary care and 28.9% were in specialty out-patient clinics, mostly for non-psychiatric diagnoses.
Conclusions
Suicide among persons with AUD is often shortly preceded by healthcare encounters in primary care or specialty out-patient clinics. Encounters in these settings are important opportunities to identify active suicidality and intervene accordingly in patients with AUD.
Background: Hemolysis of blood samples is the leading cause of specimen rejection from hospital laboratories. It contributes to delays in patient care and disposition decisions. Coagulation tests (prothrombin time/international normalized ratio [PT/INR] and activated partial thromboplastin time [aPTT]) are especially problematic for hemolysis in our academic hospital, with at least one sample rejected daily from the emergency department (ED). Aim Statement: We aimed to decrease the monthly rate of hemolyzed coagulation blood samples sent from the ED from 2.9% (53/1,857) to the best practice benchmark of less than 2% by September 1st, 2019. Measures & Design: Our outcome measure was the rate of hemolyzed coagulation blood samples. Our process measure was the rate of coagulation blood tests sent per 100 ED visits. Our balancing measure was the number of incident reports by clinicians when expected coagulation testing did not occur. We used monthly data for our Statistical Process Control (SPC) charts, as well as chi-square and Mann-Whitney U tests for our before-and-after evaluation. Using the Model for Improvement as the project's framework, we applied direct observation, broad stakeholder engagement, and process mapping to identify root causes. We enlisted nursing champions to develop our Plan-Do-Study-Act (PDSA) cycles/interventions: 1) educating nurses on hemolysis and coagulation testing; 2) redesigning the peripheral intravenous and blood work supply carts to encourage best practice; and 3) removing PT/INR and aPTT from automatic inclusion in our electronic chest pain bloodwork panel. Evaluation/Results: The average rate of hemolysis remained unchanged from baseline (2.9%, p = 0.83). The average rate of coagulation testing sent per 100 ED visits decreased from 41.5 to 28.8 (absolute decrease 12.7 per 100, p < 0.05), avoiding $4,277 in monthly laboratory costs. The SPC chart of our process measure showed special cause variation, with more than eight consecutive points below the centerline. Discussion/Impact: Our project reduced coagulation testing without changing hemolysis rates. Buy-in from frontline nurses was integral to the project's early success before we implemented our electronic approach, a solution ranked higher on the hierarchy of intervention effectiveness, to help sustainability. This resource stewardship project will now be spread to a nearby institution using similar approaches.
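For readers unfamiliar with SPC charts, the sketch below shows how a p-chart for a monthly proportion such as the hemolysis rate can be built, with three-sigma limits around a pooled centerline. The monthly counts are invented for illustration and are not the project's data.
```python
# Minimal sketch of a p-chart for a monthly proportion (e.g., hemolyzed samples
# over coagulation samples sent). Counts below are illustrative only.
import numpy as np
import matplotlib.pyplot as plt

hemolyzed = np.array([53, 49, 61, 50, 47, 55, 52, 58])
sent = np.array([1857, 1790, 1920, 1810, 1755, 1880, 1840, 1900])

p = hemolyzed / sent
p_bar = hemolyzed.sum() / sent.sum()          # pooled centerline
sigma = np.sqrt(p_bar * (1 - p_bar) / sent)   # per-month standard error
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, None)

months = np.arange(1, len(p) + 1)
plt.plot(months, p, "o-", label="monthly rate")
plt.axhline(p_bar, color="black", label="centerline")
plt.plot(months, ucl, "r--", label="UCL")
plt.plot(months, lcl, "r--", label="LCL")
plt.xlabel("month"); plt.ylabel("hemolysis rate"); plt.legend()
plt.show()
```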
This study investigated the attitudes of medical students towards psychiatry, both as a subject on the medical curriculum and as a career choice. Three separate questionnaires previously validated on medical student populations were administered prior to and immediately following an 8-week clinical training programme. The results indicate that the perception of psychiatry was positive prior to clerkship and became even more so on completion of training. On completion of the clerkship, there was a rise in the proportion of students who indicated that they might choose a career in psychiatry. Attitudes toward psychiatry correlated positively with psychiatry examination results. Those who intended to specialise in psychiatry achieved significantly higher scores in the psychiatry examination.
Background: Emergency Department (ED) communication between patients and clinicians is fraught with challenges. A local survey of 65 ED patients revealed low patient satisfaction with ED communication and resultant patient anxiety. Aim Statement: To increase patient satisfaction with ED communication and to decrease patient anxiety related to lack of ED visit information (primary aims), and to decrease clinician-perceived patient interruptions (secondary aim), each by one point on a 5-point Likert scale over a six-month period. Measures & Design: We performed wide stakeholder engagement, surveyed patients and clinicians, and conducted a patient focus group. An inductive analysis followed by a yield-feasibility-effort grid led to three interventions, introduced through sequential and additive Plan-Do-Study-Act (PDSA) cycles. PDSA 1: a clinician communication tool (Acknowledge-Empathize-Inform [AEI] tool), based on survey themes and a literature review, and introduced through a multi-modal education approach. PDSA 2: patient information pamphlets developed with stakeholder input. PDSA 3: a new waiting room TV screen with various informational ED-specific videos. Measures were collected through anonymous surveys: the primary aims towards the end of the patient's ED stay, and the secondary aim at the end of the clinician's shift. We used Statistical Process Control (SPC) charts with usual special cause variation rules. Two-tailed Mann-Whitney tests were used to assess for statistical significance between means (significance: p < 0.05). Evaluation/Results: Over five months, 232 patient and 104 clinician surveys were collected. Wait times, ED processes, timing of typical steps, and directions were reported as the most important communication gaps and were addressed in the interventions. Patient satisfaction improved from 3.28 (5 being best, all means; n = 65) to 4.15 (n = 59, p < 0.0001). Patient anxiety improved from 2.96 (1 being best; n = 65) to 2.31 (n = 59, p < 0.01). Clinician-perceived interruptions went from 4.33 (1 being best; n = 30) to 4.18 (n = 11, p = 0.98). SPC charts using Likert scales did not show special cause variation. Discussion/Impact: A sequential, additive approach using pragmatic and low-cost interventions based on both clinician and patient input led to increased patient satisfaction with communication and decreased patient anxiety related to lack of ED visit information after the PDSA cycles. These approaches could easily be replicated in other EDs to improve the patient experience.
Vitamin D deficiency is a common occurrence globally, and particularly so in pregnancy. There is conflicting evidence regarding the role of vitamin D during pregnancy in non-skeletal health outcomes for both the mother and the neonate. The aim of this study was to investigate the associations of maternal total 25-hydroxy vitamin D (25OHD) with neonatal anthropometrics and markers of neonatal glycaemia in the Belfast centre of the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study. Serological samples (n 1585) were obtained from pregnant women in the Royal Jubilee Maternity Hospital, Belfast, Northern Ireland, between 24 and 32 weeks’ gestation as part of the HAPO study. 25OHD concentrations were measured by liquid chromatography tandem-MS. Cord blood and neonatal anthropometric measurements were obtained within 72 h of birth. Statistical analysis was performed. After adjustment for confounders, birth weight standard deviation scores (SDS) and birth length SDS were significantly associated with maternal total 25OHD. A doubling of maternal 25OHD at 28 weeks’ gestation was associated with mean birth weight SDS and mean birth length SDS higher by 0·05 and 0·07, respectively (both, P=0·03). There were no significant associations with maternal 25OHD and other measures of neonatal anthropometrics or markers of neonatal glycaemia. In conclusion, maternal total 25OHD during pregnancy was independently associated with several neonatal anthropometric measurements; however, this association was relatively weak.
Type 2 diabetes plays a major role in racial/ethnic health disparities. We conducted the first study to examine whether multifaceted interventions targeting patients with poorly controlled diabetes (HgbA1c >9%) can reduce racial/ethnic disparities in diabetes control. Among 4595 patients with diabetes at a Federally Qualified Health Center in New York, a higher percentage of blacks (32%) and Hispanics/Latinos (32%) had poorly controlled diabetes than whites (25%) at baseline (prevalence ratio, 1.28; 95% CI, 1.14–1.43; P<0.001). After four years, this percentage was reduced in all groups (blacks, 21%; Hispanics/Latinos, 20%; whites, 20%; P<0.001 for each relative to baseline). Disparities in diabetes control also were significantly reduced (change in disparity relative to whites: blacks, P=0.03; Hispanics/Latinos, P=0.008). In this diverse population, interventions targeting patients with poorly controlled diabetes not only improved diabetes control in all racial/ethnic groups, but significantly reduced disparities. This approach warrants further testing and may help reduce disparities in other populations.
A few studies have examined the association between vitamin D and telomere length, and fewer still have examined the relationship in black or male populations. We investigated the cross-sectional association between the vitamin D metabolite 25-hydroxyvitamin D (25(OH)D) concentration in plasma and relative leucocyte telomere length (LTL) in 1154 US radiologic technologists who were 48–93 years old (373 white females, 278 white males, 338 black females, 165 black males). Plasma 25(OH)D concentration was measured by the chemiluminescence immunoassay, and relative LTL was measured by quantitative PCR. Logistic regression was used to obtain OR and 95 % CI for long v. short (based on median) LTL in relation to continuous 25(OH)D, quartiles of 25(OH)D and 25(OH)D deficiency. We found no significant association between continuous 25(OH)D and long LTL in all participants (Ptrend=0·440), nor in white females (Ptrend=0·845), white males (Ptrend=0·636), black females (Ptrend=0·967) or black males (Ptrend=0·484). Vitamin D deficiency (defined as 25(OH)D<30 nmol/l), however, was significantly associated with short LTL in whites (P=0·024), but not in other groups. In this population, we found little evidence to support associations between 25(OH)D and long LTL over the entire range of 25(OH)D in the overall study population or by sex and race.
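A minimal sketch of the logistic model described (long vs short relative LTL as outcome, 25(OH)D as exposure) is shown below using statsmodels; the file name, variable names, and choice of covariates are illustrative assumptions, not the study's analysis code.
```python
# Minimal sketch: odds of long (above-median) relative LTL by plasma 25(OH)D.
# File name, column names, and covariates are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ltl_cohort.csv")
df["long_ltl"] = (df["rel_ltl"] > df["rel_ltl"].median()).astype(int)

fit = smf.logit("long_ltl ~ vitd_25ohd + age + C(sex) + C(race)", data=df).fit()
odds_ratios = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```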
Timely morbidity surveillance of sheltered populations is crucial for identifying and addressing their immediate needs, and accurate surveillance allows us to better prepare for future disasters. However, disasters often create travel and communication challenges that complicate the collection and transmission of surveillance data. We describe a surveillance project, conducted in November 2012 in New Jersey shelters after Hurricane Sandy, that successfully used cellular phones for remote real-time reporting. This project demonstrated that, when supported with just-in-time morbidity surveillance training, cellular phone reporting was a successful, sustainable, and less labor-intensive methodology than in-person shelter visits for capturing morbidity data from multiple locations, and it opened a two-way communication channel with shelters. (Disaster Med Public Health Preparedness. 2015;10:525–528)
There is evidence for health benefits from ‘Palaeolithic’ diets; however, few data exist on the acute effects of rationally designed Palaeolithic-type meals. In the present study, we used Palaeolithic diet principles to construct meals comprising readily available ingredients: fish and a variety of plants, selected to be rich in fibre and phyto-nutrients. We investigated the acute effects of two Palaeolithic-type meals (PAL1 and PAL2) and a reference meal based on WHO guidelines (REF) on blood glucose control, gut hormone responses and appetite regulation. Using a randomised cross-over trial design, healthy subjects were given three meals on separate occasions. PAL2 and REF were matched for energy, protein, fat and carbohydrates; PAL1 contained more protein and energy. Plasma glucose, insulin, glucagon-like peptide-1 (GLP-1), glucose-dependent insulinotropic peptide (GIP) and peptide YY (PYY) concentrations were measured over a period of 180 min. Satiation was assessed using electronic visual analogue scale (EVAS) scores. GLP-1 and PYY concentrations were significantly increased across 180 min for both PAL1 (P= 0·001 and P< 0·001) and PAL2 (P= 0·011 and P= 0·003) compared with REF. Concomitant EVAS scores showed increased satiety. By contrast, GIP concentration was significantly suppressed. Positive incremental AUC over 120 min for glucose and insulin did not differ between the meals. Consumption of meals based on Palaeolithic diet principles resulted in significant increases in incretin and anorectic gut hormones and increased perceived satiety. Surprisingly, this was independent of the energy or protein content of the meal and therefore suggests potential benefits for reduced risk of obesity.
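As an aside for readers, the positive incremental AUC used for the 0–120 min glucose and insulin comparison is simply the trapezoidal area above the fasting baseline, with dips below baseline counted as zero; the sketch below illustrates the calculation on invented values, not study data.
```python
# Minimal sketch of a positive incremental AUC: trapezoidal area of the
# excursion above the fasting (time-0) value, truncated at zero.
# Time points and glucose values are invented for illustration.
import numpy as np

def positive_iauc(times_min, values):
    excess = np.clip(np.asarray(values, dtype=float) - values[0], 0, None)
    return np.trapz(excess, times_min)

times_min = [0, 15, 30, 60, 90, 120]
glucose = [5.1, 6.4, 7.2, 6.0, 5.5, 5.2]   # mmol/l
print(round(positive_iauc(times_min, glucose), 1), "mmol/l x min")
```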
We briefly describe 2 systems that provided disaster-related mortality surveillance during and after Hurricane Sandy in New York City, namely, the New York City Health Department Electronic Death Registration System (EDRS) and the American Red Cross paper-based tracking system.
Methods
Red Cross fatality data were linked with New York City EDRS records by using decedent name and date of birth. We analyzed cases identified by both systems for completeness and agreement across selected variables and the time interval between death and reporting in the system.
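A minimal sketch of this linkage step, implemented as an exact join on a normalised name plus date of birth in pandas, is shown below; the file and column names are hypothetical, and real linkage would likely add manual review of near-matches.
```python
# Minimal sketch of linking Red Cross fatality records to EDRS records on
# decedent name and date of birth. File and column names are hypothetical.
import pandas as pd

redcross = pd.read_csv("redcross_fatalities.csv")
edrs = pd.read_csv("nyc_edrs_sandy.csv")

for df in (redcross, edrs):
    df["name_key"] = df["decedent_name"].str.upper().str.strip()
    df["dob"] = pd.to_datetime(df["date_of_birth"])

linked = redcross.merge(edrs, on=["name_key", "dob"], suffixes=("_rc", "_edrs"))
print(f"linked {len(linked)} of {len(redcross)} Red Cross records")
```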
Results
Red Cross captured 93% (41/44) of all Sandy-related deaths; the completeness and quality varied by item, and timeliness was difficult to determine. The circumstances leading to death captured by Red Cross were particularly useful for identifying reasons individuals stayed in evacuation zones. EDRS variables were nearly 100% complete, and the median interval between date of death and reporting was 6 days (range: 0-43 days).
Conclusions
Our findings indicate that a number of steps have the potential to improve disaster-related mortality surveillance, including updating Red Cross surveillance forms and electronic databases to enhance timeliness assessments, greater collaboration across agencies to share and use data for public health preparedness, and continued expansion of electronic death registration systems. (Disaster Med Public Health Preparedness. 2014;8:489-491)
Individual differences in sleep patterns of children may have developmental origins. In the present study, two factors known to influence behavioural development, monoamine oxidase A (MAOA) genotype and prenatal Fe-deficient (ID) diet, were examined for their influences on sleep patterns in juvenile rhesus monkeys. Sleep patterns were assessed based on a threshold for inactivity as recorded by activity monitors. Pregnant monkeys were fed diets containing either 100 parts per million (ppm) Fe (Fe sufficient, IS) or 10 ppm Fe (ID). At 3–4 months of age, male offspring were genotyped for polymorphisms of the MAOA gene that lead to high or low transcription. At 1 and 2 years of age, sleep patterns were assessed. Several parameters of sleep architecture changed with age. At 1 year of age, monkeys with the low-MAOA genotype demonstrated a trend towards more sleep episodes at night compared with those with the high-MAOA genotype. When monkeys reached 2 years of age, prenatal ID reversed this trend; ID in the low-MAOA group resulted in sleep fragmentation, more awakenings at night and more sleep episodes during the day when compared with prenatal IS in this genotype. The ability to consolidate sleep during the dark cycle was disrupted by prenatal ID, specifically in monkeys with the low-MAOA genotype.
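A minimal sketch of the kind of threshold-based sleep scoring described above (epochs of low activity counted as sleep, short runs discarded) is given below; the threshold, epoch length, and minimum bout length are illustrative assumptions, not the study's scoring parameters.
```python
# Minimal sketch of scoring sleep from actigraphy counts with an inactivity
# threshold. Threshold and minimum bout length are illustrative assumptions.
import numpy as np

def score_sleep(counts, threshold=10, min_bout_epochs=10):
    """Mark an epoch as sleep if counts < threshold, keeping only runs of at
    least min_bout_epochs consecutive quiet epochs."""
    quiet = np.asarray(counts) < threshold
    scored = np.zeros(quiet.shape, dtype=bool)
    run_start = None
    for i, q in enumerate(np.append(quiet, False)):   # sentinel closes last run
        if q and run_start is None:
            run_start = i
        elif not q and run_start is not None:
            if i - run_start >= min_bout_epochs:
                scored[run_start:i] = True
            run_start = None
    return scored

counts = np.random.poisson(lam=5, size=1440)          # hypothetical 1-min epochs
print(score_sleep(counts).sum(), "epochs scored as sleep")
```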
Anti-black prejudice affects how some citizens evaluate black candidates. What does it take to reduce the role of prejudice in these evaluations? Using logical implications of relevant psychological phenomena, this article shows that repeated exposure to counter-stereotypical information is insufficient to reduce evaluative prejudice. Instead, citizens must associate this prejudice with adverse effects for themselves in contexts that induce them to rethink their existing racial beliefs. These findings explain important disagreements in empirical prejudice research, as only some empirical research designs supply the conditions for prejudice reduction predicted here. This study also clarifies why similarly situated citizens react so differently to counter-stereotypical information. In sum, we find that prejudice change is possible, but in a far narrower set of circumstances than many scholars claim.