The Syriac term yaṣrā, “inclination,” “urge,” “wilfulness,” and its use in Syriac texts have not until recently been the subject of any detailed study. This is perhaps surprising, not only because of the term's interest for an understanding of early Syriac Christian thought, but also because of its potential contribution to discussions of the origins and development of Jewish concepts of the yeṣer.
Compulsory admission procedures for patients with mental disorders vary between countries in Europe. The Ethics Committee of the European Psychiatric Association (EPA) launched a survey on involuntary admission procedures for patients with mental disorders in 40 countries, gathering information from all National Psychiatric Associations that are members of the EPA, with the aim of developing recommendations for improving involuntary admission processes and promoting voluntary care.
The survey focused on the legislation governing involuntary admissions and the key actors involved in the admission procedure, as well as the most common reasons for involuntary admissions.
We analyzed the survey's categorical data thematically; the themes highlight that both medical and legal actors are involved in involuntary admission procedures.
We conclude that the legal grounds for compulsory admission should be reworded in order to remove stigmatization of the patient; that raising awareness of involuntary admission procedures and patient rights among both patients and family advocacy groups is paramount; that communication about procedures should be widely available in lay language for the general population; and that training sessions and guidance should be available for legal and medical practitioners. Finally, people working in the field need to be constantly aware of the ethical challenges surrounding compulsory admissions.
Why patients with psychosis use cannabis remains debated. The self-medication hypothesis has received some support, but other evidence points towards an alleviation-of-dysphoria model. This study investigated the reasons for cannabis use in first-episode psychosis (FEP) and whether the strength of their endorsement changed over time.
FEP inpatients and outpatients who used cannabis, recruited at the South London and Maudsley, Oxleas and Sussex NHS Trusts, UK, rated their motives at baseline (n = 69), 3 months (n = 29) and 12 months (n = 36). A random intercept model was used to test the change in strength of endorsement over the 12 months. Paired-sample t-tests assessed the differences in mean scores between the five subscales of the Reasons for Use Scale (enhancement, social motive, coping with unpleasant affect, conformity and acceptance, and relief of positive symptoms and side effects) at each time-point.
Time had a significant effect on scores when controlling for reason; average scores on each subscale were higher at baseline than at 3 months and 12 months. At each time-point, patients endorsed ‘enhancement’ followed by ‘coping with unpleasant affect’ and ‘social motive’ more highly for their cannabis use than any other reason. ‘Conformity and acceptance’ followed closely. ‘Relief of positive symptoms and side effects’ was the least endorsed motive.
Patients endorsed their reasons for use at 3 months and 12 months less strongly than at baseline. Little support for the self-medication or alleviation of dysphoria models was found. Rather, patients rated ‘enhancement’ most highly for their cannabis use.
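To make the statistical approach concrete, here is a minimal sketch (Python, using pandas, statsmodels and scipy) of a random intercept model and a paired-sample t-test of the kind described above. It is not the authors' code: the file name, the column names ('subject', 'time', 'reason', 'score') and the subscale labels are illustrative assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Long-format data: one row per patient x time-point x subscale.
# 'reasons_for_use.csv' and all column names are hypothetical.
df = pd.read_csv("reasons_for_use.csv")

# Random intercept per patient; tests the effect of time on endorsement
# scores while controlling for reason (subscale).
model = smf.mixedlm("score ~ time + C(reason)", df, groups=df["subject"])
print(model.fit().summary())

# Paired-sample t-test between two subscales at baseline.
base = df[df["time"] == 0].pivot(index="subject", columns="reason", values="score")
t, p = stats.ttest_rel(base["enhancement"], base["coping"], nan_policy="omit")
print(f"enhancement vs coping: t = {t:.2f}, p = {p:.3f}")
```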
Dietary Zn has significant impacts on the growth and development of breeding rams. The objectives of this study were to evaluate the effects of dietary Zn source and concentration on serum Zn concentration, growth performance, wool traits and reproductive performance in rams. Forty-four Targhee rams (14 months; 68 ± 18 kg BW) were used in an 84-day completely randomized design and were fed one of three pelleted dietary treatments: (1) a control without fortified Zn (CON; n = 15; ~1 × NRC); (2) a diet fortified with a Zn amino acid complex (ZnAA; n = 14; ~2 × NRC) and (3) a diet fortified with ZnSO4 (ZnSO4; n = 15; ~2 × NRC). Growth and wool characteristics measured throughout the course of the study were BW, average daily gain (ADG), dry matter intake (DMI), feed efficiency (G : F), longissimus dorsi muscle depth (LMD), back fat (BF), wool staple length (SL) and average fibre diameter (AFD). Blood was collected from each ram at four time periods to quantify serum Zn and testosterone concentrations. Semen was collected 1 to 2 days after the trial was completed. There were no differences in BW (P = 0.45), DMI (P = 0.18), LMD (P = 0.48), BF (P = 0.47) and AFD (P = 0.9) among treatment groups. Rams consuming ZnSO4 had greater (P ≤ 0.03) serum Zn concentrations than those consuming the ZnAA and CON treatments. Rams consuming ZnAA had greater (P ≤ 0.03) ADG than those consuming ZnSO4 and CON. There tended to be differences among groups for G : F (P = 0.06), with ZnAA being numerically greater than ZnSO4 and CON. Wool staple length regrowth was greater (P < 0.001) in the ZnSO4 group and tended to be longer (P = 0.06) in the ZnAA group compared with CON. No differences were observed among treatments in scrotal circumference, testosterone, spermatozoa concentration within ram semen, % motility, % live sperm and % sperm abnormalities (P ≥ 0.23). Results indicated beneficial effects of feeding increased Zn concentrations to developing Targhee rams, although Zn source elicited differential responses in the performance characteristics measured.
The COllaborative project of Development of Anthropometrical measures in Twins (CODATwins) project is a large international collaborative effort to analyze individual-level phenotype data from twins in multiple cohorts from different environments. The main objective is to study factors that modify genetic and environmental variation of height, body mass index (BMI, kg/m²) and size at birth, and additionally to address other research questions such as long-term consequences of birth size. The project started in 2013 and is open to all twin projects in the world having height and weight measures on twins with information on zygosity. Thus far, 54 twin projects from 24 countries have provided individual-level data. The CODATwins database includes 489,981 twin individuals (228,635 complete twin pairs). Since many twin cohorts have collected longitudinal data, there is a total of 1,049,785 height and weight observations. For many cohorts, we also have information on birth weight and length, own smoking behavior and own or parental education. We found that the heritability estimates of height and BMI systematically changed from infancy to old age. Remarkably, only minor differences in the heritability estimates were found across cultural–geographic regions, measurement time and birth cohort for height and BMI. In addition to genetic epidemiological studies, we looked at associations of height and BMI with education, birth weight and smoking status. Within-family analyses examined differences within same-sex and opposite-sex dizygotic twins in birth size and later development. The CODATwins project demonstrates the feasibility and value of international collaboration to address gene-by-exposure interactions that require large sample sizes and address the effects of different exposures across time, geographical regions and socioeconomic status.
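The heritability estimates mentioned above come from twin modelling; the abstract does not spell out the method, but the classical starting point is to contrast monozygotic (MZ) and dizygotic (DZ) twin-pair correlations. Below is a minimal sketch using Falconer's formula, with placeholder correlations rather than CODATwins results.

```python
# Falconer's estimates from twin-pair intraclass correlations.
# The rmz/rdz values below are placeholders, not CODATwins results.
def falconer(rmz: float, rdz: float) -> dict:
    return {
        "A": 2 * (rmz - rdz),  # additive genetic variance (heritability)
        "C": 2 * rdz - rmz,    # shared (common) environment
        "E": 1 - rmz,          # unique environment and measurement error
    }

print(falconer(rmz=0.85, rdz=0.45))  # roughly A=0.8, C=0.05, E=0.15
```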
We have observed the G23 field of the Galaxy And Mass Assembly (GAMA) survey using the Australian Square Kilometre Array Pathfinder (ASKAP) in its commissioning phase, to validate the performance of the telescope and to characterise the detected galaxy populations. This observation covers ~48 deg² with a synthesised beam of 32.7 arcsec by 17.8 arcsec at 936 MHz, and ~39 deg² with a synthesised beam of 15.8 arcsec by 12.0 arcsec at 1320 MHz. At both frequencies, the root-mean-square (r.m.s.) noise is ~0.1 mJy/beam. We combine these radio observations with the GAMA galaxy data, which include spectroscopy of galaxies that are i-band selected with a magnitude limit of 19.2. Wide-field Infrared Survey Explorer (WISE) infrared (IR) photometry is used to determine which galaxies host an active galactic nucleus (AGN). In properties including source counts, mass distributions, and the IR versus radio luminosity relation, the ASKAP-detected radio sources behave as expected. Radio galaxies have higher stellar mass and luminosity in IR, optical, and UV than other galaxies. We apply optical and IR AGN diagnostics and find that they disagree for ~30% of the galaxies in our sample. We suggest possible causes for the disagreement. Some cases can be explained by optical extinction of the AGN, but for more than half of the cases we do not find a clear explanation. Radio sources are more likely (~6%) to have an AGN than radio-quiet galaxies (~1%), but the majority of AGN are not detected in radio at this sensitivity.
Shiga toxin-producing Escherichia coli (STEC) infection can cause serious illness including haemolytic uraemic syndrome. The role of socio-economic status (SES) in differential clinical presentation and exposure to potential risk factors amongst STEC cases has not previously been reported in England. We conducted an observational study using a dataset of all STEC cases identified in England, 2010–2015. Odds ratios for clinical characteristics of cases and foodborne, waterborne and environmental risk factors were estimated using logistic regression, stratified by SES, adjusting for baseline demographic factors. Incidence was higher in the highest SES group compared to the lowest (RR 1.54, 95% CI 1.19–2.00). Odds of Accident and Emergency attendance (OR 1.35, 95% CI 1.10–1.75) and hospitalisation (OR 1.71, 95% CI 1.36–2.15) because of illness were higher in the most disadvantaged compared to the least, suggesting potential lower ascertainment of milder cases or delayed care-seeking behaviour in disadvantaged groups. Advantaged individuals were significantly more likely to report salad/fruit/vegetable/herb consumption (OR 1.59, 95% CI 1.16–2.17), non-UK or UK travel (OR 1.76, 95% CI 1.40–2.27; OR 1.85, 95% CI 1.35–2.56) and environmental exposures (walking in a paddock, OR 1.82, 95% CI 1.22–2.70; soil contact, OR 1.52, 95% CI 1.09–2.13), suggesting that other unmeasured risks, such as person-to-person transmission, could be more important in the most disadvantaged group.
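As a rough illustration of the odds ratios and 95% confidence intervals quoted above, the sketch below computes a Wald interval from a 2×2 exposure table. The counts are invented, and the study's actual estimates came from logistic regression adjusted for baseline demographic factors, so this shows only the mechanics.

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """OR and Wald 95% CI; a,b = exposed/unexposed cases, c,d = controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Invented counts, purely to show the calculation:
print(odds_ratio_ci(120, 80, 150, 160))  # OR 1.60 with its 95% CI
```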
An unexpected increase in gastroenteritis cases was reported by healthcare workers on the KwaZulu-Natal coast, South Africa, in January 2017, with >600 cases seen over a 3-week period. A case–control study was conducted to identify the source and risk factors associated with the outbreak, so as to recommend control and prevention measures. Record review identified cases and controls, and structured telephonic interviews were conducted to obtain exposure history. Stool specimens were collected from 20 cases along with environmental samples, and both were screened for enteric pathogens. A total of 126 cases and 62 controls were included in the analysis. The odds of developing gastroenteritis were 6.0 times greater among holidaymakers than residents (95% confidence interval (CI) 2.0–17.7). Swimming in the lagoon increased the odds of developing gastroenteritis 3.3 times (95% CI 1.06–10.38). Lagoon water samples tested positive for norovirus (NoV) GI.6, GII.3 and GII.6, astrovirus and rotavirus. Eleven (55%) stool specimens were positive for NoV, with eight genotyped as GI.1 (n = 2), GI.5 (n = 3), GI.6 (n = 2), and GI.7 (n = 1). A reported sewage contamination event impacting the lagoon was the likely source, with person-to-person spread perpetuating the outbreak. Restricting swimming in the lagoon was apparently ineffective at preventing the outbreak, possibly due to inadequate enforcement, communication and signage strategies.
We present deep low-radio-frequency (230–470 MHz) observations from the Karl G. Jansky Very Large Array of the Perseus cluster, probing the non-thermal emission from the old particle population of the AGN outflows. Our observations of this nearby relaxed cool-core cluster have revealed a multitude of new structures associated with the mini-halo, extending to hundreds of kpc in size. Its irregular morphology seems to have been influenced both by the AGN activity and by the sloshing motion of the cluster's gas. In addition, it has a filamentary structure similar to that seen in radio relics found in merging clusters. These results illustrate the high-quality images that can be obtained with the new JVLA at low radio frequencies.
Introduction: Point of care ultrasound (PoCUS) is an established tool in the initial management of patients with undifferentiated hypotension in the emergency department (ED). While PoCUS protocols have been shown to improve early diagnostic accuracy, there is little published evidence for any mortality benefit. We report the findings from our international multicenter randomized controlled trial, assessing the impact of a PoCUS protocol on survival and key clinical outcomes. Methods: Recruitment occurred at 7 centres in North America (4) and South Africa (3). Scans were performed by PoCUS-trained physicians. Screening at triage identified patients (SBP<100 mmHg or shock index >1), who were randomized to PoCUS or control (standard care, no PoCUS) groups. Demographics, clinical details and study findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. The primary outcome measure was 30-day/discharge mortality. Secondary outcome measures included diagnostic accuracy, changes in vital signs, acid-base status, and length of stay. Categorical data were analyzed using Fisher's test, and continuous data using Student's t-test and multi-level log-regression testing (GraphPad/SPSS). Final chart review was blinded to initial impressions and PoCUS findings. Results: 258 patients were enrolled and follow-up was fully completed. Baseline comparisons confirmed effective randomization. There was no difference between groups for the primary outcome of mortality: PoCUS 32/129 (24.8%; 95% CI 14.3-35.3%) vs. control 32/129 (24.8%; 95% CI 14.3-35.3%); RR 1.00 (95% CI 0.869 to 1.15; p=1.00). There were no differences in the secondary outcomes of ICU and total length of stay. Our sample size has a power of 0.80 (α:0.05) for a moderate effect size. Other secondary outcomes are reported separately. Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We did not find any mortality or length of stay benefit with the use of a PoCUS protocol, though a larger study is required to confirm these findings. While PoCUS may have diagnostic benefits, these may not translate into a survival benefit.
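For readers who want to see the mechanics of the mortality comparison, here is a hedged sketch: Fisher's test on the 2×2 outcome table and a relative risk with a log-RR Wald interval. The event counts are taken from the abstract, but the abstract's CI may have been computed by a different method, so this is illustrative only.

```python
import math
from scipy import stats

deaths_pocus, n_pocus = 32, 129   # from the abstract
deaths_ctrl, n_ctrl = 32, 129

table = [[deaths_pocus, n_pocus - deaths_pocus],
         [deaths_ctrl, n_ctrl - deaths_ctrl]]
_, p = stats.fisher_exact(table)
print(f"Fisher's test: p = {p:.2f}")  # identical proportions give p = 1.00

rr = (deaths_pocus / n_pocus) / (deaths_ctrl / n_ctrl)
se = math.sqrt(1/deaths_pocus - 1/n_pocus + 1/deaths_ctrl - 1/n_ctrl)
lo, hi = math.exp(math.log(rr) - 1.96 * se), math.exp(math.log(rr) + 1.96 * se)
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```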
Introduction: Point of Care Ultrasound (PoCUS) protocols are commonly used to guide resuscitation for emergency department (ED) patients with undifferentiated non-traumatic hypotension. While PoCUS has been shown to improve early diagnosis, there is minimal evidence for any outcome benefit. We completed an international multicenter randomized controlled trial (RCT) to assess the impact of a PoCUS protocol on key resuscitation markers in this group. We report diagnostic impact and mortality elsewhere. Methods: The SHoC-ED1 study compared the addition of PoCUS to standard care within the first hour in the treatment of adult patients presenting with undifferentiated hypotension (SBP<100 mmHg or a shock index >1.0) with a control group that did not receive PoCUS. Scans were performed by PoCUS-trained physicians. 4 North American and 3 South African sites participated in the study. Resuscitation outcomes analyzed included volume of fluid administered in the ED and changes in shock index (SI), modified early warning score (MEWS), venous acid-base balance, and lactate, at one and four hours. Comparisons utilized a t-test as well as stratified binomial log-regression to assess for any significant improvement in resuscitation among the outcomes. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled and follow-up was fully completed. Baseline comparisons confirmed effective randomization. There was no significant difference in mean total volume of fluid received between the control (1658 ml; 95% CI 1365-1950) and PoCUS groups (1609 ml; 95% CI 1385-1832; p=0.79). Significant improvements were seen in SI, MEWS, lactate and bicarbonate with resuscitation in both the PoCUS and control groups; however, there was no difference between groups. Conclusion: SHoC-ED1 is the first RCT to compare PoCUS to standard of care in hypotensive ED patients. No significant difference in fluid used or markers of resuscitation was found when comparing the use of a PoCUS protocol to standard of care in the resuscitation of patients with undifferentiated hypotension.
Introduction: Point of care ultrasonography (PoCUS) is an established tool in the initial management of hypotensive patients in the emergency department (ED). It has been shown to rule out certain shock etiologies and improve diagnostic certainty; however, evidence on its benefit in the management of hypotensive patients is limited. We report the findings from our international multicenter RCT assessing the impact of a PoCUS protocol on diagnostic accuracy, as well as other key outcomes including mortality, which are reported elsewhere. Methods: Recruitment occurred at 4 North American and 3 South African sites. Screening at triage identified patients (SBP<100 mmHg or shock index >1), who were randomized to either PoCUS or control groups. Scans were performed by PoCUS-trained physicians. Demographics, clinical details and findings were collected prospectively. Initial and secondary diagnoses were recorded at 0 and 60 minutes, with ultrasound performed in the PoCUS group prior to secondary assessment. Final chart review was blinded to initial impressions and PoCUS findings. Categorical data were analyzed using Fisher's two-tailed test. Our sample size was powered at 0.80 (α:0.05) for a moderate effect size. Results: 258 patients were enrolled and follow-up was fully completed. Baseline comparisons confirmed effective randomization. The perceived shock category changed more frequently in the PoCUS group, 20/127 (15.7%), vs. control, 7/125 (5.6%); RR 2.81 (95% CI 1.23 to 6.42; p=0.0134). There was no significant difference in change of diagnostic impression between groups: PoCUS 39/123 (31.7%) vs. control 34/124 (27.4%); RR 1.16 (95% CI 0.786 to 1.70; p=0.4879). There was no significant difference in the rate of correct category of shock between PoCUS (118/127; 93%) and control (113/122; 93%); RR 1.00 (95% CI 0.936 to 1.08; p=1.00), or for correct diagnosis: PoCUS 90/127 (70%) vs. control 86/122 (70%); RR 0.987 (95% CI 0.671 to 1.45; p=1.00). Conclusion: This is the first RCT to compare PoCUS to standard care for undifferentiated hypotensive ED patients. We found that the use of PoCUS did change physicians' perceived shock category. PoCUS did not improve diagnostic accuracy for category of shock or diagnosis.
Animal health surveillance enables the detection and control of animal diseases including zoonoses. Under the EU-FP7 project RISKSUR, a survey was conducted in 11 EU Member States and Switzerland to describe active surveillance components in 2011 managed by the public or private sector and identify gaps and opportunities. Information was collected about hazard, target population, geographical focus, legal obligation, management, surveillance design, risk-based sampling, and multi-hazard surveillance. Two countries were excluded due to incompleteness of data. Most of the 664 components targeted cattle (26·7%), pigs (17·5%) or poultry (16·0%). The most common surveillance objectives were demonstrating freedom from disease (43·8%) and case detection (26·8%). Over half of components applied risk-based sampling (57·1%), but mainly focused on a single population stratum (targeted risk-based) rather than differentiating between risk levels of different strata (stratified risk-based). About a third of components were multi-hazard (37·3%). Both risk-based sampling and multi-hazard surveillance were used more frequently in privately funded components. The study identified several gaps (e.g. lack of systematic documentation, inconsistent application of terminology) and opportunities (e.g. stratified risk-based sampling). The greater flexibility provided by the new EU Animal Health Law means that systematic evaluation of surveillance alternatives will be required to optimize cost-effectiveness.
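To illustrate the distinction the survey draws between targeted and stratified risk-based sampling, here is a toy sketch; the strata, risk weights and sample size are invented and do not come from the RISKSUR data.

```python
def allocate(strata: dict[str, float], total: int, stratified: bool) -> dict[str, int]:
    """Targeted: all samples go to the single highest-risk stratum.
    Stratified: samples are split across strata in proportion to risk."""
    if stratified:
        weight = sum(strata.values())
        return {s: round(total * r / weight) for s, r in strata.items()}
    top = max(strata, key=strata.get)
    return {s: (total if s == top else 0) for s in strata}

# Hypothetical herd strata with relative risk weights:
herds = {"import_contact": 0.6, "mixed_species": 0.3, "closed_herd": 0.1}
print(allocate(herds, 500, stratified=False))  # targeted: 500 / 0 / 0
print(allocate(herds, 500, stratified=True))   # stratified: 300 / 150 / 50
```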
Urban slum environments in the tropics are conducive to the proliferation and the spread of rodent-borne zoonotic pathogens to humans. Calodium hepaticum (Bancroft, 1893) is a zoonotic nematode known to infect a variety of mammalian hosts, including humans. Norway rats (Rattus norvegicus) are considered the most important mammalian host of C. hepaticum and are therefore a potentially useful species to inform estimates of the risk to humans living in urban slum environments. There is a lack of studies systematically evaluating the role of demographic and environmental factors that influence both carriage and intensity of infection of C. hepaticum in rodents from urban slum areas within tropical regions. Carriage and the intensity of infection of C. hepaticum were studied in 402 Norway rats over a 2-year period in an urban slum in Salvador, Brazil. Overall, prevalence in Norway rats was 83% (337/402). Independent risk factors for C. hepaticum carriage in R. norvegicus were age and valley of capture. Of those infected, the proportion with gross liver involvement (i.e. >75% of the liver affected, a proxy for a high intensity of infection) was low (8%, 26/337). Sixty soil samples were collected from ten locations to estimate levels of environmental contamination and provide information on the potential risk to humans of contracting C. hepaticum from the environment. Sixty percent (6/10) of the sites were contaminated with C. hepaticum. High carriage levels of C. hepaticum within Norway rats and sub-standard living conditions within slum areas may increase the risk to humans of exposure to the infective eggs of C. hepaticum. This study supports the need for further studies to assess whether humans are becoming infected within this community and whether C. hepaticum is posing a significant risk to human health.
Burnt mounds, or fulachtaí fiadh as they are known in Ireland, are probably the most common prehistoric site type in Ireland and Britain. Typically Middle–Late Bronze Age in date (although both earlier and later examples are known), they are artefact-poor and rarely associated with settlements. The function of these sites has been much debated, with the most commonly cited uses being for cooking, as steam baths or saunas, for brewing, tanning, or textile processing. A number of major infrastructural development schemes in Ireland in the years 2002–2007 revealed remarkable numbers of these mounds, often associated with wood-lined troughs, many of which were extremely well preserved. This afforded an opportunity to investigate them as landscape features using environmental techniques, specifically plant macrofossils and charcoal, pollen, beetles, and multi-element analyses. This paper summarises the results from eight sites in Ireland and compares them with burnt mound sites in Great Britain. The fulachtaí fiadh, which generally occur in clusters, are all groundwater-fed by springs, along floodplains and at the bases of slopes. The sites are associated with the clearance of wet woodland for fuel; most had evidence of nearby agriculture and all revealed low levels of grazing. Multi-element analysis at two sites revealed elevated heavy metal concentrations, suggesting that off-site soil, ash or urine had been used in the trough. Overall the evidence suggests that the most likely function of these sites was textile production, involving the cleaning and/or dyeing of wool and/or natural plant fibres, with hide cleaning and tanning as functionally related activities. Whilst further research is clearly needed to confirm whether fulachtaí fiadh are part of the ‘textile revolution’, we should also recognise their important role in the rapid deforestation of the wetter parts of primary woodland and the expansion of agriculture into marginal areas during the Irish and British Bronze Ages.
Background: Planning for neurology training necessitated a reflection on the experience of graduates. We explored the practice characteristics and training experience of recent graduates. Methods: Graduates from 2010-2014 completed a survey. Results: The response rate was 37% of 211 graduates. 56% were female. 91% were adult neurologists. 65% practiced in an outpatient setting. 63% worked in academics. 85% completed subspecialty training (median 1 year). 36% work 3 days a week or less. 82% took general call (median 1 night weekly). Role preparation was considered very good or excellent for most roles; however, ratings were poor or fair for advocacy (17%) and leadership (8%). Training feedback was at least “good” for 87%. Burnout a few times a week or more was noted by 5% (6% during residency, particularly in PGY1 and PGY5). 64% felt overly burdened by paperwork. Although most felt training was adequate overall, it was rated poor or fair at preparing graduates for practice management (85%) and personal balance (55%). Most conditions were under-observed in the training environment. Many noted a need for more independent practice development and community neurology. Conclusions: Although training was generally found to be very good, identified needs included advocacy training and more training in general neurology in longitudinal outpatient/community settings.
We present maps in the visible emission lines of [S II] and the infrared emission lines of H2, at 2.12 μm, for several bipolar outflow complexes which exhibit jet structures. A comparison of the morphology of this infrared emission with that seen in the visible emission lines shows that both the visible and the H2 emission exhibit clumpy structure on similar scales. It appears that the brightest H2 emission occurs at the working surfaces of the jets. Virtually no H2 emission is associated with the jets themselves.
Velocity profiles are presented for several objects and possible emission mechanisms are discussed.
Gene × Environment interaction contributes to externalizing disorders in childhood and adolescence, but little is known about whether such effects are long lasting or present in adulthood. We examined gene–environment interplay in the concurrent and prospective associations between antisocial peer affiliation and externalizing disorders (antisocial behavior and substance use disorders) at ages 17, 20, 24, and 29. The sample included 1,382 same-sex twin pairs participating in the Minnesota Twin Family Study. We detected a Gene × Environment interaction at age 17, such that additive genetic influences on antisocial behavior and substance use disorders were greater in the context of greater antisocial peer affiliation. This Gene × Environment interaction was not present for antisocial behavior symptoms after age 17, but it was for substance use disorder symptoms through age 29 (though effect sizes were largest at age 17). The results suggest adolescence is a critical period for the development of externalizing disorders wherein exposure to greater environmental adversity is associated with a greater expression of genetic risk. This form of Gene × Environment interaction may persist through young adulthood for substance use disorders, but it appears to be limited to adolescence for antisocial behavior.
The Evolutionary Map of the Universe (EMU) is a proposed radio continuum survey of the Southern Hemisphere up to declination +30°, with the Australian Square Kilometre Array Pathfinder (ASKAP). EMU will use an automated source identification and measurement approach that is demonstrably optimal, to maximise the reliability and robustness of the resulting radio source catalogues. As a step toward this goal we conducted a “Data Challenge” to test a variety of source finders on simulated images. The aim is to quantify the accuracy and limitations of existing automated source finding and measurement approaches. The Challenge initiators also tested the current ASKAPsoft source-finding tool to establish how it could benefit from incorporating successful features of the other tools. As expected, most finders show completeness around 100% at ≈10σ, dropping to about 10% by ≈5σ. Reliability is typically close to 100% at ≈10σ, with performance at lower sensitivities varying between finders. All finders show the expected trade-off, where a high completeness at low signal-to-noise gives a corresponding reduction in reliability, and vice versa. We conclude with a series of recommendations for improving the performance of the ASKAPsoft source-finding tool.
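For readers new to these metrics: completeness is the fraction of true sources a finder recovers, and reliability is the fraction of its detections that are real. Below is a minimal sketch of how they can be computed by cross-matching a finder's catalogue against the simulation's truth catalogue; the matching radius and the invented test catalogues are assumptions, not the Data Challenge's actual pipeline.

```python
import numpy as np

def completeness_reliability(truth_xy, found_xy, radius=5.0):
    """Cross-match detections to truth positions within a radius (pixels)."""
    matched_truth = np.zeros(len(truth_xy), dtype=bool)
    matched_found = np.zeros(len(found_xy), dtype=bool)
    for i, (tx, ty) in enumerate(truth_xy):
        d = np.hypot(found_xy[:, 0] - tx, found_xy[:, 1] - ty)
        j = int(np.argmin(d))
        if d[j] < radius and not matched_found[j]:
            matched_truth[i] = matched_found[j] = True
    return matched_truth.mean(), matched_found.mean()  # completeness, reliability

# Synthetic demo: 200 true sources, 180 detected with small positional noise.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 1000, size=(200, 2))
found = truth[:180] + rng.normal(0, 1.0, size=(180, 2))  # 20 sources missed
print(completeness_reliability(truth, found))  # ~ (0.9, 1.0)
```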