This article considers whether candidates strategically use emotional rhetoric in social media messages, much as fear appeals are deployed strategically in televised campaign advertisements. We use a dataset of tweets issued by the campaign accounts of candidates for the US House of Representatives during the last two months of the 2018 midterm elections to determine whether candidate vulnerability predicts the presence of certain emotions in social media messages. Contrary to theoretical expectations, we find that vulnerability does not appear to inspire candidates to use more anxious language in their tweets. However, we do find evidence of a surprising relationship between sad rhetoric and vulnerability, and evidence that campaign context influences the use of other forms of negative rhetoric in tweets.
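The abstract does not specify how emotional rhetoric was measured; as a purely illustrative sketch, a lexicon-based scoring approach of the kind often applied to campaign text might look like this (the word lists and function name are hypothetical, not the paper's instrument):

```python
# Illustrative lexicon-based emotion scoring for tweets. The word lists
# below are invented placeholders; a real study would use a validated
# dictionary. Scores each tweet by the share of tokens matching a lexicon.
ANXIETY_WORDS = {"afraid", "worried", "threat", "danger", "fear"}   # hypothetical
SADNESS_WORDS = {"sad", "loss", "grief", "tragic", "mourn"}         # hypothetical

def emotion_scores(tweet: str) -> dict[str, float]:
    tokens = [t.strip(".,!?") for t in tweet.lower().split()]
    n = max(len(tokens), 1)
    return {
        "anxiety": sum(t in ANXIETY_WORDS for t in tokens) / n,
        "sadness": sum(t in SADNESS_WORDS for t in tokens) / n,
    }

print(emotion_scores("The threat to our district is real and I am worried."))
```

Per-tweet scores like these could then be regressed on a measure of candidate vulnerability, which is the relationship the study tests.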
Recent declines of wild pollinators and infections in honey, bumble and other bee species have raised concerns about pathogen spillover from managed honey and bumble bees to other pollinators. Parasites of honey and bumble bees include trypanosomatids and microsporidia that often exhibit low host specificity, suggesting potential for spillover to co-occurring bees via shared floral resources. However, experimental tests of trypanosomatid and microsporidial cross-infectivity outside of managed honey and bumble bees are scarce. To characterize potential cross-infectivity of honey and bumble bee-associated parasites, we inoculated three trypanosomatids and one microsporidian into five potential hosts – including four managed species – from the apid, halictid and megachilid bee families. We found evidence of cross-infection by the trypanosomatids Crithidia bombi and C. mellificae, with evidence for replication in 3/5 and 3/4 host species, respectively. These include the first reports of experimental C. bombi infection in Megachile rotundata and Osmia lignaria, and C. mellificae infection in O. lignaria and Halictus ligatus. Although inability to control amounts inoculated in O. lignaria and H. ligatus hindered estimates of parasite replication, our findings suggest a broad host range in these trypanosomatids, and underscore the need to quantify disease-mediated threats of managed social bees to sympatric pollinators.
Significant experimental evidence supports fat as a taste modality; however, the associated peripheral mechanisms are not well established. Several candidate taste receptors have been identified, but their expression patterns and potential functions in human fungiform papillae remain unknown. The aim of this study was to identify the candidate fat taste receptors and ion channels expressed in human fungiform taste buds and their association with oral sensitivity to fatty acids. For the expression analysis, quantitative RT-PCR (qRT-PCR) of RNA extracted from human fungiform papillae samples was used to determine the expression of candidate fatty acid receptors and ion channels. Western blotting was used to confirm the presence of the corresponding proteins in fungiform papillae, and immunohistochemistry was used to localise the expressed receptors and ion channels in the taste buds of fungiform papillae. Correlations were then assessed between the qRT-PCR expression levels of the detected fat taste receptors and ion channels and fat taste thresholds, liking of fatty food and fat intake. qRT-PCR and western blotting indicated that mRNA and protein of CD36, FFAR4, FFAR2, GPR84 and delayed rectifying K+ channels are expressed in human fungiform taste buds. The expression level of CD36 was associated with the liking difference score between high-fat and low-fat food (R = −0·567, β = −0·04, P = 0·04), and FFAR2 was associated with total fat intake (ρ = −0·535, β = −0·01, P = 0·003) and saturated fat intake (ρ = −0·641, β = −0·02, P = 0·008).
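As an illustrative sketch of the correlation analyses described (Pearson's R for the liking difference score, Spearman's ρ for fat intake), assuming hypothetical column names and input file:

```python
# Minimal sketch of the correlation analysis described in the abstract.
# The file and column names are hypothetical; the study's exact workflow
# is not detailed here.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

df = pd.read_csv("fungiform_expression.csv")  # hypothetical data file

# Pearson correlation: CD36 expression vs. high-fat/low-fat liking difference
r, p = pearsonr(df["CD36_mRNA"], df["liking_difference"])
print(f"CD36 vs liking difference: R = {r:.3f}, P = {p:.3f}")

# Spearman correlation: FFAR2 expression vs. total fat intake
rho, p = spearmanr(df["FFAR2_mRNA"], df["total_fat_intake"])
print(f"FFAR2 vs total fat intake: rho = {rho:.3f}, P = {p:.3f}")
```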
Simulation-based education (SBE) is an important training strategy in emergency medicine (EM) postgraduate programs. This study sought to characterize the use of simulation in FRCPC-EM residency programs across Canada.
A national survey was administered to residents and knowledgeable program representatives (PRs) at all Canadian FRCPC-EM programs. Survey question themes included simulation program characteristics, the frequency of resident participation, the location and administration of SBE, institutional barriers, interprofessional involvement, content, assessment strategies, and attitudes about SBE.
Resident and PR response rates were 63% (203/321) and 100% (16/16), respectively. Residents reported a median of 20 (range 0–150) hours of annual simulation training, with 52% of residents indicating that the time dedicated to simulation training met their needs. PRs reported the frequency of SBE sessions ranging from weekly to every 6 months, with 15 (94%) programs having an established simulation curriculum. Two (13%) of the programs used simulation for resident assessment, although 15 (94%) of PRs indicated that they would be comfortable with simulation-based assessment. The most common PR-identified barriers to administering simulation were a lack of protected faculty time (75%) and a lack of faculty experience with simulation (56%). Interprofessional involvement in simulation was strongly valued by both residents and PRs.
SBE is frequently used by Canadian FRCPC-EM residency programs. However, there exists considerable variability in the structure, frequency, and timing of simulation-based activities. As programs transition to competency-based medical education, national organizations and collaborations should consider the variability in how SBE is administered.
Adequate pain relief at the scene of injury and during transport to hospital is a major challenge in all acute trauma, especially for patients with hip fractures, whose injuries are difficult to immobilize and whose long-term outcomes may be adversely affected by administration of opiate analgesics. Fascia Iliaca Compartment Block (FICB) is a procedure routinely undertaken by clinicians in emergency departments for hip fracture patients, but its use by paramedics at the scene of emergency calls has not yet been evaluated (1).
We undertook a randomized controlled feasibility trial using novel audited scratchcard randomization to allocate eligible patients to FICB or usual care. Paramedics were recruited and trained to assess patients for hip fracture and carry out FICB. We will follow up patients to assess the accuracy of paramedic diagnosis, acceptability to patients and paramedics, paramedic compliance, and measures of pain, side effects, time in hospital and quality of life, in order to plan a full trial if appropriate. The primary outcome measure is health-related quality of life, measured using the Short Form (SF)-12 at 1 and 6 months. Interviews and focus groups will be used to understand the acceptability of FICB to patients and paramedics. This study was funded by Health and Care Research Wales (1003).
We have developed:
• a paramedic pathway to assess patients for hip fracture and FICB
• a paramedic training package, delivered by a Consultant Anaesthetist
To date we have recruited nineteen paramedics; ten are fully trained and recruiting patients, the remainder are being trained. Fifty-four patients have been randomized and thirty-five have consented to follow-up. Thirteen 1-month and five 6-month follow-up questionnaires have been received.
This study will enable us to recommend whether to undertake a definitive multi-centre randomized controlled trial of FICB by paramedics for hip fracture to determine if the procedure is effective for patients and worthwhile for the National Health Service.
New approaches are needed to safely reduce emergency admissions to hospital by targeting interventions effectively in primary care. A predictive risk stratification tool (PRISM) identifies each registered patient's risk of an emergency admission in the following year, allowing practitioners to identify and manage those at higher risk. We evaluated the introduction of PRISM in primary care in one area of the United Kingdom, assessing its impact on emergency admissions and other service use.
We conducted a randomized stepped wedge trial with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. PRISM was implemented in eleven primary care practice clusters (total thirty-two practices) over a year from March 2013. We analyzed routine linked data outcomes for 18 months.
We included outcomes for 230,099 registered patients, assigned to ranked risk groups.
Overall, the rate of emergency admissions was higher in the intervention phase than in the control phase: adjusted difference in number of emergency admissions per participant per year at risk, δ = 0.011 (95 percent confidence interval, CI: 0.010, 0.013). Patients in the intervention phase spent more days in hospital per year: adjusted δ = 0.029 (95 percent CI: 0.026, 0.031). Both effects were consistent across risk groups.
Primary care activity also increased in the intervention phase overall (δ = 0.011; 95 percent CI: 0.007, 0.014), except in the two highest risk groups, which showed a decrease in the number of days with recorded activity.
Introduction of a predictive risk model in primary care was associated with increased emergency episodes across the general practice population and at each risk level, in contrast to the intended purpose of the model. Future evaluation work could assess the impact of targeting of different services to patients across different levels of risk, rather than the current policy focus on those at highest risk.
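As a rough illustration of how the adjusted between-phase differences reported above might be estimated, here is a minimal sketch of a mixed model for a stepped wedge design; the model specification, variable names, and data file are assumptions, not the trial's published analysis:

```python
# Minimal sketch of a stepped wedge analysis (assumed specification).
# Hypothetical patient-level data: one row per participant with practice
# cluster, calendar step, phase (0 = control, 1 = intervention), and
# emergency admissions per year at risk.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("prism_outcomes.csv")  # hypothetical data file

# Random intercept for cluster; fixed effects for phase and calendar step,
# separating the intervention effect from any secular trend.
model = smf.mixedlm(
    "admissions_per_year ~ phase + C(step)", df, groups=df["cluster"]
)
result = model.fit()
print(result.summary())  # the 'phase' coefficient plays the role of δ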
Emergency admissions to hospital are a major financial burden on health services. In one area of the United Kingdom (UK), we evaluated a predictive risk stratification tool (PRISM) designed to support primary care practitioners to identify and manage patients at high risk of admission. We assessed the costs of implementing PRISM and its impact on health services costs. At the same time as the study, but independent of it, an incentive payment (‘QOF’) was introduced to encourage primary care practitioners to identify high risk patients and manage their care.
We conducted a randomized stepped wedge trial in thirty-two practices, with cluster-defined control and intervention phases, and participant-level anonymized linked outcomes. We analysed routine linked data on patient outcomes for 18 months (February 2013 – September 2014). We assigned standard unit costs in pound sterling to the resources utilized by each patient. Cost differences between the two study phases were used in conjunction with differences in the primary outcome (emergency admissions) to undertake a cost-effectiveness analysis.
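For clarity, the cost-effectiveness analysis described pairs the between-phase cost difference with the difference in the primary outcome; the standard incremental cost-effectiveness ratio (a textbook relation, not quoted from the paper) is:

```latex
% Incremental cost-effectiveness ratio: incremental cost per unit change
% in effect (here, per emergency admission) between study phases.
\[
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E}
  \;=\; \frac{C_{\text{intervention}} - C_{\text{control}}}
             {E_{\text{intervention}} - E_{\text{control}}}
\]
```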
We included outcomes for 230,099 registered patients. We estimated a PRISM implementation cost of GBP0.12 per patient per year.
Costs of emergency department attendances, outpatient visits, emergency and elective admissions to hospital, and general practice activity were higher per patient per year in the intervention phase than in the control phase (adjusted δ = GBP76; 95 percent CI: GBP46, GBP106), an effect that was consistent across risk levels and generally increased with risk level.
Despite low reported use of PRISM, it was associated with increased healthcare expenditure. This effect was unexpected and in the opposite direction to that intended. We cannot disentangle the effects of introducing the PRISM tool from those of imposing the QOF targets; however, since across the UK predictive risk stratification tools for emergency admissions have been introduced alongside incentives to focus on patients at risk, we believe that our findings are generalizable.
A predictive risk stratification tool (PRISM) to estimate a patient's risk of an emergency hospital admission in the following year was trialled in general practice in an area of the United Kingdom. PRISM's introduction coincided with a new incentive payment (‘QOF’) in the regional contract for family doctors to identify and manage the care of people at high risk of emergency hospital admission.
Alongside the trial, we carried out a complementary qualitative study of processes of change associated with PRISM's implementation. We aimed to describe how PRISM was understood, communicated, adopted, and used by practitioners, managers, local commissioners and policy makers. We gathered data through focus groups, interviews and questionnaires at three time points (baseline, mid-trial and end-trial). We analyzed data thematically, informed by Normalisation Process Theory (1).
All groups showed high awareness of PRISM, but raised concerns about whether it could identify patients not yet known, and about whether there were sufficient community-based services to respond to care needs identified. All practices reported using PRISM to fulfil their QOF targets, but after the QOF reporting period ended, only two practices continued to use it. Family doctors said PRISM changed their awareness of patients and focused them on targeting the highest-risk patients, though they were uncertain about the potential for positive impact on this group.
Though external factors supported its uptake in the short term, with a focus on the highest risk patients, PRISM did not become a sustained part of normal practice for primary care practitioners.
The present study investigated the relationship between the milk protein content of a rehydration solution and fluid balance after exercise-induced dehydration. On three occasions, eight healthy males were dehydrated to an identical degree of body mass loss (BML, approximately 1·8 %) by intermittent cycling in the heat, then rehydrated with 150 % of their BML over 1 h with either a 60 g/l carbohydrate solution (C), a 40 g/l carbohydrate, 20 g/l milk protein solution (CP20) or a 20 g/l carbohydrate, 40 g/l milk protein solution (CP40). Urine samples were collected pre-exercise, post-exercise, post-rehydration and for a further 4 h. Subjects produced less urine after ingesting the CP20 or CP40 drink compared with the C drink (P < 0·01), and at the end of the study, more of the CP20 (59 (sd 12) %) and CP40 (64 (sd 6) %) drinks had been retained compared with the C drink (46 (sd 9) %) (P < 0·01). At the end of the study, whole-body net fluid balance was more negative for trial C (−470 (sd 154) ml) than for trials CP20 (−181 (sd 280) ml) and CP40 (−107 (sd 126) ml) (P < 0·01). At 2 and 3 h after drink ingestion, urine osmolality was greater for trials CP20 and CP40 than for trial C (P < 0·05). The present study further demonstrates that after exercise-induced dehydration, a carbohydrate–milk protein solution is better retained than a carbohydrate solution. The results also suggest that high concentrations of milk protein are not more beneficial in terms of fluid retention than low concentrations of milk protein following exercise-induced dehydration.
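As a worked illustration of the retention and net fluid balance arithmetic implied by these results (only the 150 % rehydration protocol comes from the abstract; the body mass loss and urine output figures below are invented for illustration):

```python
# Illustrative fluid balance arithmetic. Retention = fraction of the drink
# not lost as urine; net balance is relative to pre-exercise euhydration.
body_mass_loss_ml = 1300                        # hypothetical: ~1.8% of a 72 kg subject
volume_ingested_ml = 1.5 * body_mass_loss_ml    # rehydration at 150% of BML (protocol)
urine_output_ml = 800                           # hypothetical cumulative post-drink urine

retained_ml = volume_ingested_ml - urine_output_ml
retention_pct = 100 * retained_ml / volume_ingested_ml

# Net whole-body fluid balance relative to the pre-exercise state:
net_balance_ml = retained_ml - body_mass_loss_ml

print(f"Retention: {retention_pct:.0f}%  Net balance: {net_balance_ml:+.0f} ml")
```

With these illustrative numbers the subject retains 59 % of the drink but remains 150 ml in deficit, which is the pattern the CP20 figures above describe.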
Cultivation is a critical component of organic weed management and has relevance in conventional farming. Limitations with current cultivation tools include high costs, limited efficacy, and marginal applicability across a range of crops, soil types, soil moisture conditions, and weed growth stages. The objectives of this research were to compare the weed control potential of two novel tools, a block cultivator and a stirrup cultivator, with that of a conventional S-tine cultivator, and to evaluate crop response when each tool was used in pepper and broccoli. Block and stirrup cultivators were mounted on a toolbar with an S-tine sweep. In 2008, the tripart cultivator was tested in 20 independently replicated noncrop field events. Weed survival and reemergence data were collected from the cultivated area of each of the three tools. Environmental data were also collected. A multivariable model was created to assess the importance of cultivator design and environmental and operational variables on postcultivation weed survival. Additional trials in 2009 evaluated the yield response of pepper and broccoli to interrow cultivations with each tool. Cultivator design significantly influenced postcultivation weed survival (P < 0.0001). When weed survival was viewed collectively across all 20 cultivations, both novel cultivators significantly increased control. Relative to the S-tine sweep, the stirrup cultivator reduced weed survival by about one-third and the block cultivator reduced weed survival by greater than two-thirds. Of the 11 individually assessed environmental and operational parameters, 7 had significant implications for weed control with the sweep, 5 impacted control with the stirrup cultivator, and only 1 (surface weed cover at the time of cultivation) influenced control with the block cultivator. Crop response to each cultivator was identical. The block cultivator, because of its increased effectiveness and operational flexibility, has the potential to improve interrow mechanical weed management.
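As a minimal sketch of the kind of multivariable model described (an assumed logistic specification with hypothetical variable names and data file; the paper's exact model is not given in the abstract):

```python
# Minimal sketch of a multivariable model of postcultivation weed survival.
# Assumed logistic regression on hypothetical per-weed records: survived
# (0/1), cultivator type, and environmental/operational covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cultivation_events.csv")  # hypothetical data file

model = smf.logit(
    "survived ~ C(cultivator) + surface_weed_cover + soil_moisture + speed_kph",
    data=df,
)
result = model.fit()
print(result.summary())  # exponentiate coefficients for odds ratios
```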
Cultivation tools have a long history of use. The integration of cultivation within current organic and conventional weed management programs is conditional on the availability of functional, practical cultivation tools. However, current cultivation tools have performance and operational limitations, and the prospect of serviceable improvement in weed control is the impetus behind the creation of new tool designs. The primary objective of this research was to design and construct two cultivators that might address the limitations of current cultivation tools. A secondary objective was to identify historical influences on the technology, availability, and capability of cultivation tools. Two new tractor-mounted cultivators were designed and constructed as loose adaptations of antique handheld tools. The first tool, a block cultivator, has a flat surface at the front of the tool that rests against the soil and limits the entry of a rear-mounted blade. The second tool resembles a stirrup hoe, in which a horizontal steel blade with a beveled front edge slices through the upper layer of the soil. Block and stirrup cultivator units were mounted on a toolbar with a traditional S-tine sweep so that the novel cultivators could be compared directly with a common standard. Relative to the S-tine sweep, the stirrup cultivator reduced weed survival by about one-third and the block cultivator by more than two-thirds. Of the three tools, the block cultivator's performance was least influenced by environmental and operational variances.
The paper reports on the fourth (2010) season of fieldwork of the Cyrenaican Prehistory Project, and on further results of analyses of artefacts and organic materials collected in the 2009 season. Ground-based LiDAR has provided both an accurate 3D scan of the Haua Fteah cave and information on the cave's morphometry and origins. The excavations in the cave focussed on Middle Palaeolithic or Middle Stone Age ‘Pre-Aurignacian’ layers below the base of the Middle Trench beside the McBurney Deep Sounding (Trench D) and on Final Palaeolithic ‘Oranian’ layers beside the upper part of the Middle Trench (Trench M). Although McBurney referred to the upper part of the Deep Sounding as more or less sterile, the 2010 excavations found evidence for small-scale but regular human presence in the form of stone artefacts and debitage, though given the sedimentary context the latter are unlikely to represent in situ knapping. The excavations of Trench M extended from the basal Capsian layers investigated in 2009 through Oranian layers to the transition with the Dabban Upper Palaeolithic. Some 17,000 lithic pieces have been studied from the Capsian and Oranian layers excavated in Trench M, in an area measuring less than 2 m by 1 m by 1.1 m deep, along with numerous animal bones, molluscs, and macrobotanical remains, as well as occasional shell beads. Preliminary studies of the lithics, bones, molluscs, and plant remains are revealing the changing character of late Pleistocene (Oranian) and early Holocene (Capsian) occupation in the Haua Fteah. Alongside the work in the Haua Fteah, the project continued its assessment of the Quaternary and archaeological sequences of the Cyrenaican coastland and completed a transect survey of surface lithic materials and their landform contexts from the pre-desert across the Gebel Akhdar to the coast, with a new focus on the al-Marj basin. Significant differences are emerging in patterns of Middle Palaeolithic and later hominin occupation and palaeodemography.
Switching resistance of a metal/oxide/metal structure (Pt/Pr1−xCaxMnO3/Pt) upon the stimulation of electric pulses has triggered vast research interest and activity for next-generation resistive RAM applications. Continuing from an earlier paper that studied the mechanism of switching resistance, this paper extends the discussion to switching endurance using admittance spectra. Experimental data indicated that interfacial dipoles (or states) exist at the metal/oxide interfaces and that switching resistance comes from the change of these interfacial dipoles. However, those interfacial dipoles are all meta-stable, indicating a problem of long-term switching endurance. We tested many metal/PCMO contact combinations and characterized those contacts against basic memory criteria: bit separation, data retention, switching endurance, switching speed, and readout limitations. Among them, TiN/PCMO showed large bit separation, excellent data retention, and fast switching speed, but failed the long-term switching endurance test. The poor switching endurance is discussed using an energy-well diagram. A material system providing bi-stable states with reasonably large free-energy separations is therefore a must for this type of RRAM application. These energy levels can come from either structural or electrochemical potential differences at the metal/oxide interfaces.
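For context, the stability of a state in an energy-well picture is commonly quantified by an Arrhenius-type dwell time (a standard relation, not taken from the paper):

```latex
% Dwell time of a state in a well of depth \Delta G; \tau_0 is an attempt
% time, k_B Boltzmann's constant, T temperature.
\[
\tau \;=\; \tau_0 \exp\!\left(\frac{\Delta G}{k_B T}\right)
\]
```

A shallow well (small ΔG) gives a short dwell time, which is one way to read the meta-stable dipoles and poor long-term endurance described above: durable bi-stability requires wells deep enough that τ far exceeds the required retention time.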
Wide-bandgap III-nitride-based avalanche photodiodes (APDs) are important for photodetectors operating in the UV spectral region. Growth of GaN-based heteroepitaxial layers on lattice-mismatched substrates such as sapphire and SiC introduces a high density of defects, causing device failure by premature microplasma breakdown before the electric field reaches the level of the bulk avalanche breakdown field; this has hampered the development of III-nitride-based APDs. In this study, we investigate the growth and characterization of GaN- and AlGaN-based APDs on free-standing bulk GaN substrates. Epitaxial layers of GaN and AlxGa1−xN p-i-n ultraviolet avalanche photodiodes were grown by metalorganic chemical vapor deposition (MOCVD). Improved crystalline and structural quality of the epitaxial layers was achieved by employing optimum growth parameters on low-dislocation-density bulk substrates, minimizing the defect density in the epitaxially grown materials. GaN and AlGaN APDs were fabricated into 30 μm- and 50 μm-diameter circular mesas, and their electrical and optoelectronic characteristics were measured. APD epitaxial structure and device design, material growth optimization, material characterization, device fabrication, and device performance characteristics are reported.
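For background, the avalanche gain of a p-i-n photodiode is often described by the empirical Miller relation (standard textbook background, not quoted from the paper):

```latex
% Empirical avalanche multiplication: V_br is the breakdown voltage and
% n is a fitted exponent that depends on material and junction design.
\[
M(V) \;=\; \frac{1}{1 - \left(V / V_{\mathrm{br}}\right)^{n}}
\]
```

Gain diverges as the bias approaches V_br; defect-induced microplasmas that break down prematurely cap the usable bias range well below this point, which is why the low-dislocation-density bulk GaN substrates matter here.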
Noble metals in contact with perovskite metal oxides, for example Pr1−xCaxMnO3 (PCMO), have shown resistance switching of a couple of orders of magnitude upon the stimulation of electrical pulses. In this paper, Pt/PCMO/Pt structures were made by e-beam evaporation (Pt electrodes) and RF sputtering (PCMO films) for a study of the switching mechanism. Specially designed experiments along with extensive electrical characterization were performed on the Pt/PCMO/Pt structure. The existence of a contact resistance, or an interfacial layer, between the Pt electrodes and the PCMO was evident from simply measuring the initial resistance (R0) of the stack against PCMO film thickness. The temperature dependence of R0 and time-bias tests were used to study the transport mechanism in the bulk of the PCMO and in the interfacial layer. Above a threshold voltage, the transport changes from mainly electronic to ionic conduction, especially at the interfaces, causing electrode polarization. The transient characteristics of the Pt/PCMO/Pt stack, i.e. its response to pulsing in the time domain, were instead characterized in the frequency domain through admittance spectroscopy measurements. The temperature dependence of the Cole-Cole plots was used to study the polarized interfacial layer, or interface dipole polarization (IDP). This IDP layer is the origin of the contact resistance and is also responsible for the uni-polar long-short switching, because the calculated relaxation time constants of the IDP corresponding to the low resistive state (LRS) and high resistive state (HRS) were similar to the experimental values. Therefore, the so-called bi-stable resistive states are just two different IDP states: one much leakier than the other.
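For reference, the relaxation behaviour read off a Cole-Cole plot is conventionally modelled by the Cole-Cole expression (standard background; the paper's exact fitting function is not given here):

```latex
% Cole-Cole relaxation: \varepsilon_s and \varepsilon_\infty are the static
% and high-frequency permittivities, \tau the relaxation time, and
% 0 \le \alpha < 1 broadens the Debye arc (\alpha = 0 recovers pure Debye).
\[
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty}
  + \frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\omega\tau)^{\,1-\alpha}},
\qquad \omega_{\text{peak}}\,\tau \approx 1
\]
```

The relaxation time constant τ is read from the frequency of the loss peak via ω_peak τ ≈ 1, which is how the IDP time constants compared against the pulse-switching experiments above would be extracted.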