Studies suggest that alcohol consumption and alcohol use disorders have distinct genetic backgrounds.
We examined whether polygenic risk scores (PRS) for consumption and problem subscales of the Alcohol Use Disorders Identification Test (AUDIT-C, AUDIT-P) in the UK Biobank (UKB; N = 121 630) correlate with alcohol outcomes in four independent samples: an ascertained cohort, the Collaborative Study on the Genetics of Alcoholism (COGA; N = 6850), and population-based cohorts: Avon Longitudinal Study of Parents and Children (ALSPAC; N = 5911), Generation Scotland (GS; N = 17 461), and an independent subset of UKB (N = 245 947). Regression models and survival analyses tested whether the PRS were associated with the alcohol-related outcomes.
In COGA, AUDIT-P PRS was associated with alcohol dependence, AUD symptom count, and maximum drinks (R² = 0.47–0.68%, p = 2.0 × 10⁻⁸–1.0 × 10⁻¹⁰), and with increased likelihood of onset of alcohol dependence (hazard ratio = 1.15, p = 4.7 × 10⁻⁸); AUDIT-C PRS was not an independent predictor of any phenotype. In ALSPAC, the AUDIT-C PRS was associated with alcohol dependence (R² = 0.96%, p = 4.8 × 10⁻⁶). In GS, AUDIT-C PRS was a better predictor of weekly alcohol use (R² = 0.27%, p = 5.5 × 10⁻¹¹), while AUDIT-P PRS was more strongly associated with problem drinking (R² = 0.40%, p = 9.0 × 10⁻⁷). Lastly, AUDIT-P PRS was associated with ICD-based alcohol-related disorders in the UKB subset (R² = 0.18%, p < 2.0 × 10⁻¹⁶).
AUDIT-P PRS was associated with a range of alcohol-related phenotypes across population-based and ascertained cohorts, while AUDIT-C PRS showed less utility in the ascertained cohort. We show that AUDIT-P is genetically correlated with both use and misuse and demonstrate the influence of ascertainment schemes on PRS analyses.
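The variance explained by a polygenic score is usually reported as the change in model R² when the PRS is added to the covariates. As a minimal illustration of the underlying quantity, the R² of a simple regression of phenotype on score can be computed from scratch (toy values, not data from any of these cohorts):

```python
def r_squared(x, y):
    """R^2 of a simple linear regression of y on x (the squared Pearson correlation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance numerator
    sxx = sum((a - mx) ** 2 for a in x)                   # variance numerators
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical PRS values vs. a phenotype score, for illustration only
prs = [0.1, 0.4, 0.2, 0.8, 0.5]
phenotype = [1.0, 2.1, 1.3, 3.2, 2.4]
r2 = r_squared(prs, phenotype)
```

In practice the incremental R² would be the difference between a full model (covariates + PRS) and a covariates-only model, fitted with a statistics package rather than by hand.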
The flat oyster Ostrea edulis has declined significantly in European waters since the 1850s as a result of anthropogenic activity. Ostrea edulis was designated a UK Biodiversity Action Plan Species and Habitat in 1995, and a Feature of Conservation Importance (FOCI) within the UK Marine & Coastal Access Act 2009. To promote the recovery of oyster beds, a greater understanding of its abundance and distribution is required. The distribution of O. edulis across the proposed Blackwater, Crouch, Roach and Colne MCZ in Essex was determined between 2008 and 2012. Ostrea edulis was present in four estuary zones, with the highest sample abundance in the Blackwater and Ray Sand zones. The size structure of populations varied, with the Ray Sand and Colne zones showing a significant lack of individuals with shell height <39 mm. Ostrea edulis occurred in highest numbers on shell substratum, followed by silty sediments. There were no significant associations between O. edulis abundance or size structure and water column Chl a, suspended solids, oxygen, nitrate or ammonium concentrations, temperature or pH. The highest abundance and most equitable population shell-size distribution for O. edulis were located within, or adjacent to, actively managed aquaculture zones. This suggests that traditional seabed management contributed to the maintenance or recovery of this species of conservation concern. Demonstration that the Essex estuaries were a stronghold for Ostrea edulis in the southern North Sea led to the designation of the Blackwater, Crouch, Roach and Colne estuaries Marine Conservation Zone in 2013.
Apex predators play a critical role in maintaining the health of ecosystems but are highly susceptible to habitat degradation and loss caused by land-use changes, and to anthropogenic mortality. The leopard Panthera pardus is the last free-roaming large carnivore in the Western Cape province, South Africa. During 2011–2015, we carried out a camera-trap survey across three regions covering c. 30,000 km² of the Western Cape. Our survey comprised 151 camera sites sampling nearly 14,000 camera-trap nights, resulting in the identification of 71 individuals. We used two spatially explicit capture–recapture methods (the R packages secr and SPACECAP) to provide a comprehensive density analysis capable of incorporating environmental and anthropogenic factors. Leopard density was estimated to be 0.35 and 1.18 leopards/100 km², using secr and SPACECAP, respectively. Leopard population size was predicted to be 102–345 individuals for our three study regions. With these estimates and the predicted available leopard habitat for the province, we extrapolated that the Western Cape supports an estimated 175–588 individuals. Providing a comprehensive baseline population density estimate is critical to understanding population dynamics across a mixed landscape and helping to determine the most appropriate conservation actions. Spatially explicit capture–recapture methods are unbiased by edge effects and superior to traditional capture–mark–recapture methods when estimating animal densities. We therefore recommend further utilization of robust spatial methods as they continue to be advanced.
Introduction: Emergency Department (ED) opioid prescribing has been linked to long-term use and dependence. Small packets of opioid medications are sometimes prescribed at discharge, i.e. ‘To-Go’, in an attempt to treat pain while avoiding unintended consequences. The extent of this practice and its associated risks are not fully understood. This study's objective was to describe the use of ‘To-Go’ opioids in a large urban center. Methods: Multicenter linked administrative databases were used to assemble an observational cohort. The referral population comprised all patients discharged from a Calgary ED in 2016 (four hospitals) with an arrival pain score greater than 0. We first described this population and then performed a multivariable analysis to assess predictors of ‘To-Go’ opioids. ‘To-Go’ opioids were either Tylenol-Codeine or Tylenol-Oxycodone. Results: A total of 88,855 patients were included. The majority were female (57%), and the average age was 44.5 years. Abdominal pain was the most frequent complaint (22.1%), followed by extremity (18.3%) and cardiac pain (8.0%). Overall, 2,736 patients (3.1%) received an opioid ‘To-Go’, with significant variation in prescribing rates across hospitals (1.8–5%, χ² p < 0.05). Logistic regression (covariates: age, sex, CTAS, pain score, type of pain, hospital, ED opioid, length of stay) revealed that receiving an opioid (IV or PO) prior to discharge was the strongest predictor of a ‘To-Go’ opioid (OR 6.4 [5.9–7.0]). Hospital (OR 1.4 [1.3–1.4]) and male sex (OR 1.2 [1.1–1.3]) also emerged as predictors, whereas age over 65 decreased the odds of a ‘To-Go’ opioid (OR 0.8 [0.6–0.9]). Hospital-specific ORs ranged from 1.3 to 2.7. Conclusion: In comparable patient populations, some hospitals are more likely than others to provide a short course of opioids at discharge. This difference is not explained by patient demographics, pain profiles, or medications given prior to discharge.
The reasons for this variation are unclear, but it underscores the need to determine the risks of ED opioid exposures and to develop clear, evidence-based prescribing guidelines.
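The odds ratios above come from a multivariable logistic model. The basic building block, an unadjusted odds ratio with a 95% Wald confidence interval from a 2×2 table, can be sketched as follows (the counts are hypothetical, not the Calgary data):

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Unadjusted odds ratio and 95% Wald CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Hypothetical counts: 'To-Go' prescriptions among patients who did vs.
# did not receive an opioid in the ED (illustrative values only)
or_, lo, hi = odds_ratio_ci(a=40, b=60, c=10, d=90)
```

Adjusted ORs like those in the abstract additionally condition on covariates, which requires fitting the full regression model.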
Patients with chronic obstructive pulmonary disease (COPD) who experience acute exacerbations usually require treatment with oral steroids or antibiotics, depending on the etiology of the exacerbation. Current management is based on the clinician's assessment and judgement, which lacks diagnostic accuracy and results in overtreatment. A test to guide these decisions in primary care is in development. We developed an early decision model to evaluate the cost-effectiveness of this treatment stratification test in the primary care setting in the United Kingdom.
A combined decision tree and Markov model of COPD progression and the exacerbation care pathway was developed. Sensitivity analysis was carried out to guide technology development and inform evidence generation requirements.
The base case test strategy cost GBP 423 (USD 542) less and resulted in a health gain of 0.15 quality-adjusted life-years per patient compared with not testing. Testing reduced antibiotic prescriptions by 30 percent, potentially lowering the risk of antimicrobial resistance developing. In sensitivity analysis, the result depended on the clinical effects of treating patients according to the test result, as opposed to treating according to clinical judgement alone, for which there is limited evidence. The results were less sensitive to the accuracy of the test.
Testing may be cost-saving in primary care, but this requires robust evidence on whether test-guided treatment is effective. High-quality evidence on the clinical utility of testing is required for early modeling of diagnostic tests generally.
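When a strategy is both cheaper and more effective, as in the base case here, it dominates the comparator; this can also be expressed as an incremental net monetary benefit. A minimal sketch, using the abstract's headline figures and a GBP 20,000/QALY threshold (a commonly cited UK convention, used here as an assumption):

```python
def net_monetary_benefit(cost, qalys, wtp):
    """Net monetary benefit: QALYs valued at the willingness-to-pay
    threshold, minus cost."""
    return qalys * wtp - cost

# Incremental figures echoing the abstract: testing saves GBP 423
# and gains 0.15 QALYs per patient vs. no testing
delta_cost = -423.0   # incremental cost of testing (negative = saving), GBP
delta_qalys = 0.15    # incremental QALYs
wtp = 20000.0         # GBP per QALY threshold (assumed)

incremental_nmb = net_monetary_benefit(delta_cost, delta_qalys, wtp)
# A strategy dominates when it is both cheaper and more effective
dominant = delta_cost < 0 and delta_qalys > 0
```

A positive incremental net monetary benefit at the chosen threshold indicates the test strategy is the preferred option under these assumptions.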
Introduction: The 72-hr unscheduled return visit (URV) of an emergency department (ED) patient is often used as a key performance indicator in Emergency Medicine. Patients with unscheduled return visits and admission to hospital (URVA) may represent a distinct subgroup of URVs compared to unscheduled return visits with no admission (URVNA). Methods: A retrospective cohort study of all 72-hr URVs in adults across nine EDs in the Edmonton Zone (EZ) over a one-year period (Jan 1, 2015–Dec 31, 2015) was performed using ED information system data. URVA and URVNA populations were compared and a multivariable analysis identified predictors of URVA. Results: Analysis of 40,870 total URV records, including 3,363 URVAs, revealed predictors of URVA on the index visit including older age (>65 yrs, OR 3.6), fewer annual ED visits (<4 visits, OR 2.0), higher disease acuity (CTAS 2, OR 2.6), gastrointestinal presenting complaint (OR 2.2), presenting to a large referral hospital (OR 1.4), and more hours spent in the ED (>12 hours, OR 2.0). A decrease in CTAS score (increase in disease acuity) upon return visit was also a risk factor (−1 CTAS level, OR 2.6). ED crowding at the index visit, as indicated by occupancy level, was not a predictor. Conclusion: We demonstrate that URVA patients comprise a distinct subgroup of 72-hr URVs across an entire health region. Risk factors for URVA are present at the index visit, suggesting that patients at high risk for URVA may be identifiable prior to admission.
The aim of this study was to describe patient-level costing methods and to develop a database of healthcare resource use and cost in patients with advanced heart failure (AHF) receiving ventricular assist device (VAD) therapy.
Patient-level micro-costing was used to identify documented activity in the years preceding and following VAD implantation, and preceding heart transplant, for a cohort of seventy-seven consecutive patients listed for heart transplantation (2009–12). Clinician interviews verified activity, established the time resource required for each activity, and added additional undocumented activities. Costs were sourced from the general ledger, salary, stock price and pharmacy formulary data, and from national medical benefits and prostheses lists. Linked administrative data analyses of activity external to the implanting institution used National Weighted Activity Units (NWAU), the 2014 efficient price, and admission complexity cost weights, and were compared with micro-costed data for the implanting admission.
The database produced includes patient-level activity and costs associated with the seventy-seven patients across thirteen resource areas, including hospital activity external to the implanting center. The median cost of the implanting admission using linked administrative data was $246,839 (interquartile range [IQR] $246,839–$271,743), versus $270,716 (IQR $211,740–$378,482) for the institutional micro-costing (p = .08).
Linked administrative data provides a useful alternative for imputing costs external to the implanting center, and combined with institutional data can illuminate both the pathways to transplant referral and the hospital activity generated by patients experiencing the terminal phases of heart failure in the year before transplant, cf-VAD implant, or death.
Evidence regarding the seasonality of urinary tract infection (UTI) consultations in primary care is conflicting and methodologically poor. To our knowledge, this is the first study to determine whether this seasonality exists in the UK, identify the peak months and describe seasonality by age. The monthly number of UTI consultations (N = 992 803) and nitrofurantoin and trimethoprim prescriptions (N = 1 719 416) during 2008–2015 was extracted from The Health Improvement Network (THIN), a large nationally representative UK dataset of electronic patient records. Negative binomial regression models were fitted to these data to investigate seasonal fluctuations by age group (14–17, 18–24, 25–45, 46–69, 70–84, 85+) and by sex, accounting for a change in the rate of UTI over the study period. A September to November peak in UTI consultation incidence was observed for ages 14–69. This seasonality progressively faded in older age groups, and no seasonality was found in individuals aged 85+, in whom UTIs were most common. UTIs were rare in males but followed a seasonal pattern similar to that in females. We show strong evidence of an autumnal seasonality for UTIs in individuals under 70 years of age and a lack of seasonality in the very old. These findings should provide helpful information when interpreting surveillance reports and the results of interventions against UTI.
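Seasonal peaks like the September–November signal described above are often modelled with harmonic (sine/cosine) terms. As a simplified illustration, the first Fourier harmonic of 12 monthly counts yields the peak month and seasonal amplitude directly (synthetic counts with a built-in October peak, not THIN data):

```python
import math

def seasonal_peak(counts):
    """Estimate the peak month and amplitude of a seasonal cycle from 12
    monthly counts (months indexed 1..12) via the first Fourier harmonic."""
    n = len(counts)
    a = sum(c * math.cos(2 * math.pi * m / n) for m, c in enumerate(counts, 1)) * 2 / n
    b = sum(c * math.sin(2 * math.pi * m / n) for m, c in enumerate(counts, 1)) * 2 / n
    phase = math.atan2(b, a)                 # phase of the fitted cosine, radians
    amplitude = math.hypot(a, b)
    peak = (phase / (2 * math.pi) * n) % n   # convert phase back to a month index
    return peak, amplitude

# Synthetic monthly counts: baseline 100 with a cosine peak at month 10 (October)
counts = [100 + 20 * math.cos(2 * math.pi * (m - 10) / 12) for m in range(1, 13)]
peak, amp = seasonal_peak(counts)
```

A negative binomial regression with these harmonic terms as covariates, as used in the study, additionally handles overdispersed counts and secular trend.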
The Functional Visual Field (FVF) offers explanatory power. In our view, it relates to the existing literature on the flexibility of attentional focus in visual search and reading (Eriksen & St. James 1986; McConkie & Rayner 1975). The target article promotes reflection on existing findings. Here we consider the FVF as a mechanism in the Prevalence Effect (PE) in visual search.
Many adults with autism spectrum disorder (ASD) remain undiagnosed. Specialist assessment clinics enable the detection of these cases, but such services are often overstretched. It has been proposed that unnecessary referrals to these services could be reduced by prioritizing individuals who score highly on the Autism-Spectrum Quotient (AQ), a self-report questionnaire measure of autistic traits. However, the ability of the AQ to predict who will go on to receive a diagnosis of ASD in adults is unclear.
We studied 476 adults, seen consecutively at a national ASD diagnostic referral service for suspected ASD. We tested AQ scores as predictors of ASD diagnosis made by expert clinicians according to International Classification of Diseases (ICD)-10 criteria, informed by the Autism Diagnostic Observation Schedule-Generic (ADOS-G) and Autism Diagnostic Interview-Revised (ADI-R) assessments.
Of the participants, 73% received a clinical diagnosis of ASD. Self-report AQ scores did not significantly predict receipt of a diagnosis. While AQ scores provided high sensitivity of 0.77 [95% confidence interval (CI) 0.72–0.82] and positive predictive value of 0.76 (95% CI 0.70–0.80), the specificity of 0.29 (95% CI 0.20–0.38) and negative predictive value of 0.36 (95% CI 0.22–0.40) were low. Thus, 64% of those who scored below the AQ cut-off were ‘false negatives’ who did in fact have ASD. Co-morbidity data revealed that generalized anxiety disorder may ‘mimic’ ASD and inflate AQ scores, leading to false positives.
The AQ's utility for screening referrals was limited in this sample. Recommendations supporting the AQ's role in the assessment of adult ASD, e.g. UK NICE guidelines, may need to be reconsidered.
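The screening metrics reported above follow directly from a 2×2 table of test result against clinical diagnosis. A sketch with hypothetical cell counts chosen to roughly mirror the reported values (these are not the study's actual counts):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # P(test positive | disorder)
        "specificity": tn / (tn + fp),  # P(test negative | no disorder)
        "ppv": tp / (tp + fp),          # P(disorder | test positive)
        "npv": tn / (tn + fn),          # P(no disorder | test negative)
    }

# Illustrative (hypothetical) counts: 348 with an ASD diagnosis, 128 without
m = diagnostic_metrics(tp=268, fn=80, fp=91, tn=37)
```

Note how a high sensitivity can coexist with a low NPV when prevalence is high, which is exactly the pattern the abstract reports for this referral sample.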
The ultimate goal of upper-limb rehabilitation after stroke is to promote real-world use, that is, use of the paretic upper limb in everyday activities outside the clinic or laboratory. Although real-world use can be collected through self-report questionnaires, an objective indicator is preferred. Accelerometers are a promising tool. The current paper aims to explore the feasibility of accelerometers for measuring upper-limb use after stroke and to discuss the translation of this measurement tool into clinical practice. Accelerometers are non-invasive, wearable sensors that measure movement in arbitrary units called activity counts. Research to date indicates that activity counts are a reliable and valid index of upper-limb use. While most accelerometers are unable to distinguish the type and quality of movements performed, recent advancements have used accelerometry data to produce clinically meaningful information for clinicians, patients, family and caregivers. Despite this, widespread uptake in research and clinical environments remains limited. If uptake were enhanced, we could build a deeper understanding of how people with stroke use their arm in real-world environments. To facilitate greater uptake, however, there is a need for greater consistency in protocol development, accelerometer application and data interpretation.
While more and more long-period giant planets are discovered by direct imaging, the distribution of planets at these separations (≳5 AU) has remained largely uncertain, especially compared with planets in the inner regions of planetary systems probed by RV and transit techniques. The low frequency of such planets, the detection challenges, and heterogeneous samples make determining the mass and orbit distributions of directly imaged planets at the end of a survey difficult. By utilizing Monte Carlo methods that incorporate the age, distance, and spectral type of each target, we can use all stars in the survey, not just those with detected planets, to learn about the underlying population. We have produced upper limits on, and direct measurements of, the frequency of these planets with the most recent generation of direct imaging surveys. The Gemini NICI Planet-Finding Campaign observed 220 young, nearby stars at a median H-band contrast of 14.5 magnitudes at 1″, representing the largest, deepest search for exoplanets at the time of its completion. The Gemini Planet Imager Exoplanet Survey is in the process of surveying 600 stars, pushing these contrasts to within a few tenths of an arcsecond of the star. With the advent of large surveys (many hundreds of stars) using advanced planet imagers, we gain the ability to move beyond measuring the frequency of wide-separation giant planets and to simultaneously determine the distribution as a function of planet mass, semi-major axis, and stellar mass, and so directly test models of planet formation and evolution.
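A core ingredient of the Monte Carlo survey analysis described above is per-star completeness: the fraction of simulated companions that land outside the instrument's inner working angle. A stripped-down sketch for circular orbits with isotropic inclinations (all parameter values hypothetical; real analyses also fold in contrast curves, ages, and evolutionary models):

```python
import math
import random

def completeness(n_draws, a_au, dist_pc, iwa_arcsec, seed=1):
    """Monte Carlo fraction of randomly oriented circular orbits whose
    projected separation falls outside the inner working angle (IWA)."""
    rng = random.Random(seed)  # seeded for reproducibility
    hits = 0
    for _ in range(n_draws):
        cos_i = rng.uniform(-1, 1)            # isotropic inclination
        phase = rng.uniform(0, 2 * math.pi)   # random orbital phase
        # sky-projected separation of a circular orbit of radius a_au, in AU
        r = a_au * math.sqrt(math.cos(phase) ** 2
                             + (cos_i * math.sin(phase)) ** 2)
        if r / dist_pc > iwa_arcsec:          # AU / pc = arcsec on the sky
            hits += 1
    return hits / n_draws

# Hypothetical target: a 10 AU companion around a star at 20 pc, IWA 0.3"
frac = completeness(2000, a_au=10.0, dist_pc=20.0, iwa_arcsec=0.3)
```

Summing such per-star detection probabilities over the whole target list is what lets every star, not just the ones with detections, constrain the underlying planet population.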
Variation in human cognitive ability is of consequence to a large number of health and social outcomes and is substantially heritable. Genetic linkage, genome-wide association, and copy number variant studies have investigated the contribution of genetic variation to individual differences in normal cognitive ability, but little research has considered the role of rare genetic variants. Exome sequencing studies have already met with success in discovering novel trait–gene associations for other complex traits. Here, we use exome sequencing to investigate the effects of rare variants on general cognitive ability. Unrelated Scottish individuals were selected for high scores on a general component of intelligence (g). The frequency of rare genetic variants in these high-g cases (n = 146) was compared with that in Scottish controls (total n = 486) who scored in the lower to middle range of the g distribution or on a proxy measure of g. Biological pathway analysis highlighted enrichment of the mitochondrial inner membrane component and apical part of cell gene ontology terms. Global burden analysis showed a greater total number of rare variants carried by high-g cases versus controls, which is inconsistent with a mutation load hypothesis whereby mutations negatively affect g. The general finding of greater non-synonymous (vs. synonymous) variant effects is in line with evolutionary hypotheses for g. Although this first sequencing study of high g was small, promising results were found, suggesting that the study of rare variants in larger samples would be worthwhile.
In England, hospital admissions for severe staphylococcal boils and abscesses trebled between 1989 and 2004. We investigated this trend using routine data from primary and secondary care. We used The Health Improvement Network (THIN), a large primary-care database, and national data on hospital admissions from Hospital Episode Statistics (HES). Time trends in the incidence of primary-care consultations for boils and abscesses were estimated for 1995–2010. HES data were used to calculate age-standardized hospital admission rates for boils, abscesses and cellulitis. The incidence of boil or abscess was 450 [95% confidence interval (CI) 447–452] per 100 000 person-years and increased slightly over the study period (incidence rate ratio 1.005, 95% CI 1.004–1.007). The rate of repeat consultation for a boil or abscess increased from 66 (95% CI 59–73) per 100 000 person-years in 1995 to a peak of 97 (95% CI 94–101) per 100 000 person-years in 2006, remaining stable thereafter. Hospital admissions for abscesses, carbuncles, furuncles and cellulitis almost doubled, from 123 admissions per 100 000 in 1998/1999 to 236 admissions per 100 000 in 2010/2011. Rising hospitalization and recurrence rates set against a background of stable community incidence suggest increased disease severity. Patients may be experiencing more severe and recurrent staphylococcal skin disease with limited treatment options.
Ramalina celastri is a highly variable, widely distributed pan-subtropical lichen species. In Australasia the species has been separated into two subspecies: R. celastri subsp. celastri and R. celastri subsp. ovalis. This study compares morphological variation, substratum preference and sequences of the internal transcribed spacer (ITS) and intergenic spacer (IGS) regions of ribosomal DNA from a range of specimens from New Zealand and one from Australia. Bayesian and ML trees generated using the sequence data form two well-supported clades corresponding to the two previously recognized subspecies. Molecular, morphological and geographical differences support the recognition of R. ovalis at species rank.
An enigma of deep-sea biodiversity research is that the abyss, with its low productivity and densities, appears to have a biodiversity similar to that of shallower depths. This conceptualization of similarity is based mainly on per-sample estimates (point diversity, within-habitat, or α-diversity). Here, we use a measure of between-sample within-community diversity (β1H) to examine benthic foraminiferal diversity across 333 stations within 49 communities from New Zealand, the South Atlantic, the Gulf of Mexico, the Norwegian Sea, and the Arctic. The communities are grouped into two depth categories: 200–1500 m and >1500 m. β1H diversity exhibits no evidence of regional differences. Instead, higher values at shallower depths are observed worldwide. At depths of >1500 m the average β1H is zero, indicating stasis or no biodiversity gradient. The difference in β1H diversity explains why, despite species richness often being greater per sample at deeper depths, the total number of species is greater at shallower depths. The greater number of communities and the higher rate of evolution resulting in shorter species durations at shallower depths are also consistent with higher β1H values.
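β1H in this study is a specific between-sample, within-community measure, but the general idea can be illustrated with an additive Shannon partition, β_H = γ_H − mean(α_H), which is zero when samples are identical and positive when they differ (toy assemblages, not the foraminiferal data):

```python
import math
from collections import Counter

def shannon(counts):
    """Shannon entropy H of a list of species abundances."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def beta_shannon(samples):
    """Additive partition: beta_H = gamma_H - mean(alpha_H), where gamma
    is computed on the pooled assemblage across all samples."""
    alphas = [shannon(list(s.values())) for s in samples]
    pooled = Counter()
    for s in samples:
        pooled.update(s)  # Counter.update adds abundances species by species
    gamma = shannon(list(pooled.values()))
    return gamma - sum(alphas) / len(alphas)

# Identical samples -> beta of zero ("stasis"); disjoint samples -> beta > 0
same = [{"sp1": 10, "sp2": 10}, {"sp1": 10, "sp2": 10}]
diff = [{"sp1": 20}, {"sp2": 20}]
```

Under this kind of partition, a β of zero across deep samples corresponds to the "no biodiversity gradient" pattern the abstract reports below 1500 m.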
Studies in North America and Europe indicate that the prevalence of blood-borne viruses (BBVs) is elevated in individuals with severe mental illness; there are no comparable data for the UK. We offered routine testing for HIV and hepatitis B and C in an inner-London in-patient psychiatric unit as a service improvement. Of the patients approached, 83% had the mental capacity to provide informed consent for testing, and 66% of patients offered testing accepted. Although it was not our objective to establish the prevalence of BBVs, 18% of patients had serological evidence of a current or previous BBV infection. We found that offering routine testing in an in-patient psychiatric setting is both practical and acceptable to patients.