Rock debris covers ~30% of glacier ablation areas in the Central Himalaya and modifies the impact of atmospheric conditions on mass balance. The thermal properties of supraglacial debris are diurnally variable but remain poorly constrained for monsoon-influenced glaciers over the timescale of the ablation season. We measured vertical debris temperature profiles at 12 sites on four glaciers in the Everest region, with debris thickness ranging from 0.08 to 2.8 m. The ice ablation season beneath supraglacial debris typically lasted 160 days (15 May to 22 October), a month longer than the monsoon season. Debris temperature gradients were approximately linear (r² > 0.83), measured as −40°C m⁻¹ where debris was up to 0.1 m thick, −20°C m⁻¹ for debris 0.1–0.5 m thick, and −4°C m⁻¹ for debris more than 0.5 m thick. Our results demonstrate that the influence of supraglacial debris on the temperature of the underlying ice surface, and therefore melt, is stable at a seasonal timescale and can be estimated from near-surface temperature. These results have the potential to greatly improve the representation of ablation in calculations of debris-covered glacier mass balance and projections of their response to climate change.
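The relationship reported above can be illustrated with a minimal sketch (not the authors' code): extrapolating from a near-surface debris temperature down to the debris–ice interface using the thickness-dependent linear gradients. The gradient values come from the abstract; the exact band edges at 0.1 m and 0.5 m are assumptions about where each gradient applies.

```python
# Hedged sketch: estimate the debris-ice interface temperature from a
# near-surface debris temperature, using the approximately linear gradients
# reported for the Everest-region sites. Band edges are assumptions.

def debris_temperature_gradient(thickness_m: float) -> float:
    """Return the reported debris temperature gradient (degC per m)."""
    if thickness_m <= 0.1:
        return -40.0
    elif thickness_m <= 0.5:
        return -20.0
    else:
        return -4.0

def ice_interface_temperature(near_surface_temp_c: float,
                              thickness_m: float) -> float:
    """Extrapolate linearly from the near-surface temperature to the ice."""
    return near_surface_temp_c + debris_temperature_gradient(thickness_m) * thickness_m

# Example: 10 degC near the surface of 0.3 m of debris implies roughly
# 10 + (-20 * 0.3) = 4 degC at the debris-ice interface.
```

This is only a first-order illustration of why near-surface temperature suffices as a predictor when the profile is close to linear.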
Animal-derived dietary protein ingestion and physical activity stimulate myofibrillar protein synthesis rates in older adults. We determined whether a non-animal-derived diet can support daily myofibrillar protein synthesis rates to the same extent as an omnivorous diet. Nineteen healthy older adults (aged 66 (sem 1) years; BMI 24 (sem 1) kg/m2; twelve males, seven females) participated in a randomised, parallel-group, controlled trial during which they consumed a 3-d isoenergetic high-protein (1·8 g/kg body mass per d) diet, where the protein was provided from predominantly (71 %) animal (OMNI; n 9; six males, three females) or exclusively vegan (VEG; n 10; six males, four females; mycoprotein providing 57 % of daily protein intake) sources. During the dietary control period, participants conducted a daily bout of unilateral resistance-type leg extension exercise. Before the dietary control period, participants ingested 400 ml of deuterated water, with 50-ml doses consumed daily thereafter. Saliva samples were collected throughout to determine body water 2H enrichments, and muscle samples were collected from rested and exercised muscle to determine daily myofibrillar protein synthesis rates. Deuterated water dosing resulted in body water 2H enrichments of approximately 0·78 (sem 0·03) %. Daily myofibrillar protein synthesis rates were 13 (sem 8) (P = 0·169) and 12 (sem 4) % (P = 0·016) greater in the exercised compared with rested leg (1·59 (sem 0·12) v. 1·77 (sem 0·12) and 1·76 (sem 0·14) v. 1·93 (sem 0·12) %/d) in OMNI and VEG groups, respectively. Daily myofibrillar protein synthesis rates did not differ between OMNI and VEG in either rested or exercised muscle (P > 0·05). Over the course of a 3-d intervention, omnivorous- or vegan-derived dietary protein sources can support equivalent rested and exercised daily myofibrillar protein synthesis rates in healthy older adults consuming a high-protein diet.
Psychosocial interventions that mitigate psychosocial distress in cancer patients are important. The primary aim of this study was to examine the feasibility and acceptability of an adaptation of the Mindful Self-Compassion (MSC) program among adult cancer patients. A secondary aim was to examine pre–post-program changes in psychosocial wellbeing.
The research design was a feasibility and acceptability study, with an examination of pre- to post-intervention changes in psychosocial measures. A study information pack was posted to 173 adult cancer patients 6 months–5 years post-diagnosis, with an invitation to attend an eight-week group-based adaptation of the MSC program.
Thirty-two (19%) consented to the program, with 30 commencing. Twenty-seven completed the program (mean age: 62.93 years, SD 14.04; 17 [63%] female), attending a mean of 6.93 (SD 1.11) group sessions. There were no significant differences in medico-demographic factors between program-completers and those who did not consent. However, there was a trend toward shorter time since diagnosis in the program-completers group. Program-completers rated the program highly regarding content, relevance to the concerns of cancer patients, and the likelihood of recommending the program to other cancer patients. Sixty-three percent perceived that their mental wellbeing had improved from pre- to post-program; none perceived a deterioration in mental wellbeing. Small-to-medium effects were observed for depressive symptoms, fear of cancer recurrence, stress, loneliness, body image satisfaction, mindfulness, and self-compassion.
Significance of results
The MSC program appears feasible and acceptable to adults diagnosed with non-advanced cancer. The preliminary estimates of effect sizes in this sample suggest that participation in the program was associated with improvements in psychosocial wellbeing. Collectively, these findings suggest that there may be value in conducting an adequately powered randomized controlled trial to determine the efficacy of the MSC program in enhancing the psychosocial wellbeing of cancer patients.
Increasingly, ambulance services offer alternatives to transfer to the emergency department (ED), when this is better for patients. The introduction of electronic health records (EHR) in ambulance services is encouraged by national policy across the United Kingdom (UK) but roll-out has been variable and complex.
Electronic Records in Ambulances (ERA) is a two-year study that aims to investigate and describe the opportunities and challenges of implementing EHR and associated technology in ambulances to support a safe and effective shift to out-of-hospital care, including the implications for the workforce in terms of training, roles, and clinical decision-making skills.
Our study includes a scoping review of relevant issues and a baseline assessment of progress in all UK ambulance services in implementing EHR. These will inform four in-depth case studies of services at different stages of implementation, assessing current usage and examining context.
The scoping review identified themes including: there are many perceived potential benefits of EHR, such as improved safety and remote diagnostics, but as yet little evidence of them; technical challenges to implementation may inhibit uptake and lead to increased workload in the short term; staff implementing EHR may do so selectively or devise workarounds; and EHR may be perceived as a tool of staff surveillance.
Our scoping review identified some complex issues around the implementation of EHR and the relevant challenges, opportunities and workforce implications. These will help to inform our fieldwork and subsequent data analysis in the case study sites, to begin early in 2017. Lessons learned from the experience of implementing EHR so far should inform future development of information technology in ambulance services, and help service providers to understand how best to maximize the opportunities offered by EHR to redesign care.
Following the 2016 U.S. election, researchers and policymakers have become intensely concerned about the dissemination of “fake news,” or false news stories in circulation (Lazer et al., 2017). Research indicates that fake news is shared widely and has a pro-Republican tilt (Allcott and Gentzkow, 2017). Facebook now flags dubious stories as disputed and tries to block fake news publishers (Mosseri, 2016). While the typical misstatements of politicians can be corrected (Nyhan et al., 2017), the sheer depth of fake news’s conspiracizing may preclude correction. Can fake news be corrected?
Anthony M. Kwasnica, Smeal College of Business, The Pennsylvania State University,
John O. Ledyard, Division of the Humanities and Social Sciences, California Institute of Technology,
David P. Porter, Economic Science Institute, Chapman University,
Christine DeMartini, Division of the Humanities and Social Sciences, California Institute of Technology
Theory, experiment and practice suggest that, when bidder valuations for multiple objects are super-additive, combinatorial auctions are needed to increase efficiency, seller revenue, and bidder willingness to participate (Bykowsky et al. 2000, Rassenti et al. 1982, Ledyard et al. 2002). A combinatorial auction is an auction in which bidders are allowed to express bids in terms of packages of objects. The now famous FCC spectrum auctions are a good example of the relevance of these issues. In 41 auction events from 1994 to 2003, the FCC used what is known as a Simultaneous Multiple Round (SMR) auction to allocate spectrum and raise over $40 billion in revenue. This auction format does not allow package bidding. The FCC auctions also divide the spectrum by geographic location. It is reasonable to expect that some bidders might receive extra benefits by obtaining larger, more contiguous portions of the spectrum. A firm might enjoy cost savings if it could purchase two adjacent locations. However, without package bidding, a bidder cannot express that preference, potentially lowering the efficiency and revenue of the auction. If a bidder attempts to acquire both licenses by bidding on them individually, it might be exposed to potential losses. The high number of bidder defaults on payments might, in part, be evidence of losses caused by the lack of package bidding. In response to these difficulties, the FCC plans to allow package bidding in future auctions (Federal Communications Commission 2002, Dunford et al. 2001). In particular, the FCC, in its auction #31 for the upper 700 MHz band, affords bidders the ability to submit bids for packages of licenses. The particular design presented in this paper was developed prior to the FCC package auction design. Indeed, one of the major features of the FCC design was clearly influenced by the pricing rules we developed herein.
Experimental Comparisons of Auction Designs
John O. Ledyard, Division of the Humanities and Social Sciences, California Institute of Technology,
David P. Porter, Economic Science Institute, Chapman University,
Antonio Rangel, Division of the Humanities and Social Sciences, California Institute of Technology
During the discussion and evaluation of proposals for the design of the Federal Communications Commission (FCC) mechanism to sell the spectrum, over 130 auctions were run under controlled conditions at Caltech for the National Telecommunications and Information Administration (NTIA), the FCC, and others. While these data were used in those debates, we do not intend to relive that process here. Instead, in this paper, we reexamine these data and try to extract some useful information for those who may, in the future, be involved in the difficult task of creating mechanisms to auction multiple items.
The two major design questions we can say something about are (1) should the items be auctioned off sequentially or simultaneously? and (2) should package bidding be allowed? Our main conclusion is that, over a very wide range of environments, package bidding mechanisms (weakly) dominate simultaneous mechanisms, which in turn (weakly) dominate sequential mechanisms. This conclusion is based on three observations derived from a close look at the data.
First, in environments with multiple items to be allocated, if those items are homogeneous and substitutes, then little coordination between buyers is needed and the only role of the mechanism is to sort bidders with high values from bidders with low values. Both the sequential and simultaneous mechanisms seem to work very well at finding efficient allocations in these “easy” environments.
Second, in environments with multiple items to be allocated, if those items are heterogeneous, then some coordination among bidders is necessary to achieve high-value allocations even if there are only low synergy values. Simultaneous auctions provide a first step at this coordination that sequential auctions might have difficulty in providing.
Third, in environments with heterogeneous goods exhibiting complementarities, significant coordination is required for an auction or allocation mechanism to perform well with respect to efficiency or revenue. Sequential auctions perform poorly. Simultaneity is clearly necessary but not sufficient to attain high efficiencies.
Combinatorial auctions enhance our ability to efficiently allocate multiple resources in complex economic environments. They explicitly allow buyers and sellers of goods and services to bid on packages of items with related values or costs. For example, “I bid $10 to buy 1 unit of item A and 2 units of item B, but I won't pay anything unless I get everything.” They also allow buyers, sellers and the auctioneer to impose logical constraints that limit the feasible set of auction allocations. For example, “I bid $12 to buy 2 units of item C OR $15 to buy 3 units of item D, but I don't want both.” Finally, they can handle functional relationships amongst bids or allocations, such as budget constraints or aggregation limits that allow many bids to be connected together. For example, “I won't spend more than a total of $35 on all my bids” or “This auction will allocate no more than a total of 7 units of items F, G and H.”
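The core computational task implied by all-or-nothing package bids is winner determination: choosing the revenue-maximising set of non-overlapping bids. The following is an illustrative sketch (not the mechanism studied in this work) using naive enumeration; production combinatorial auctions solve this NP-hard problem with integer programming instead. The bid values and item names are hypothetical.

```python
# Hedged sketch of winner determination for all-or-nothing package bids.
# Each bid is (frozenset_of_items, price); the auctioneer selects the
# revenue-maximising feasible set, i.e. bids whose packages do not overlap.
from itertools import combinations

def best_allocation(bids):
    """Return (winning_bids, revenue) by brute-force enumeration."""
    best_set, best_revenue = [], 0.0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items_claimed = set()
            feasible = True
            for package, _price in combo:
                if items_claimed & package:   # packages must not overlap
                    feasible = False
                    break
                items_claimed |= package
            if feasible:
                revenue = sum(price for _pkg, price in combo)
                if revenue > best_revenue:
                    best_set, best_revenue = list(combo), revenue
    return best_set, best_revenue

# "I bid $10 for A and B together, all or nothing" is one package bid:
bids = [
    (frozenset({"A", "B"}), 10.0),  # package bid on A and B jointly
    (frozenset({"A"}), 6.0),        # individual bid on A
    (frozenset({"B"}), 3.0),        # individual bid on B
]
winners, revenue = best_allocation(bids)
# The package bid wins, since 10 > 6 + 3 from the individual bids.
```

Logical constraints such as XOR ("C or D but not both") or budget caps would be additional feasibility checks inside the same search, which is why expanded message spaces make winner determination harder.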
There are several reasons to prefer to have the bidding message space expanded beyond the simple space used for traditional single commodity auctions. As Bykowsky et al. (2000) point out, when values have strong complementarities, there is a danger of ‘financial exposure’ that results in losses to bidders if combinatorial bidding is not allowed. For example, in the case of complementary items such as airport take-off and landing times, the ability to reduce uncertainty to the bidder by allowing him to precisely declare his object of value, a cycle of slots for an entire daily flight pattern, is obvious: one component slot not acquired ruins the value of the flight cycle. In the same situation substitution possibilities would also be important to consider: if flight cycle A is not won, cycle B may be an appropriate though less valuable substitute for the crew and equipment available. Allocation inefficiencies due to financial exposure in noncombinatorial auctions have been frequently demonstrated in experiments beginning with Rassenti et al. (1982) (see also Porter (1999), Banks et al. (1989), Ledyard et al. (2002) and Kwasnica et al. (1998)).
The brain-derived neurotrophic factor (BDNF) Val66Met polymorphism Met allele exacerbates amyloid (Aβ)-related decline in episodic memory (EM) and hippocampal volume (HV) over 36–54 months in preclinical Alzheimer's disease (AD). However, the extent to which Aβ+ and BDNF Val66Met are related to circulating markers of BDNF (e.g. serum) is unknown. We aimed to determine the effect of Aβ and the BDNF Val66Met polymorphism on levels of serum mBDNF, EM, and HV at baseline and over 18 months.
Non-demented older adults (n = 446) underwent Aβ neuroimaging and BDNF Val66Met genotyping. EM and HV were assessed at baseline and 18 months later. Fasted blood samples were obtained from each participant at baseline and at 18-month follow-up. Aβ PET neuroimaging was used to classify participants as Aβ– or Aβ+.
At baseline, Aβ+ adults showed worse EM impairment and lower serum mBDNF levels relative to Aβ– adults. BDNF Val66Met polymorphism did not affect serum mBDNF, EM, or HV at baseline. Over 18 months, compared to Aβ– Val homozygotes, Aβ+ Val homozygotes showed significant decline in EM and HV but not serum mBDNF. Similarly, compared to Aβ+ Val homozygotes, Aβ+ Met carriers showed significant decline in EM and HV over 18 months but showed no change in serum mBDNF.
While allelic variation in BDNF Val66Met may influence Aβ+ related neurodegeneration and memory loss over the short term, this is not related to serum mBDNF. Longer follow-up intervals may be required to further determine any relationships between serum mBDNF, EM, and HV in preclinical AD.
Field and greenhouse studies were conducted to evaluate selected PRE-applied herbicides for sprangletop control. In greenhouse studies, oxadiazon and dithiopyr provided excellent (> 89%) red sprangletop (L. filiformis) control. Pendimethalin and metolachlor + atrazine provided good (80–89%) to excellent control for 6 mo and 2 mo during studies 1 and 2, respectively. Isoxaben and atrazine provided poor (< 70%) control during both greenhouse studies. In field studies, good to excellent bearded sprangletop (L. fascicularis) control followed dithiopyr, pendimethalin, metolachlor, and metolachlor + atrazine treatments. Dithiopyr at 0.8 kg ai/ha provided the best (> 95%) sprangletop control throughout the 6-mo testing period. Under field conditions, control following oxadiazon was inconsistent, with good to excellent control in one study and poor control in another. Dithiopyr, pendimethalin, and metolachlor alone provided control equal to or better than metolachlor + atrazine. Oxadiazon also provided similar or better control than metolachlor + atrazine in three of four studies. Over all studies, dithiopyr provided the best sprangletop control, followed by metolachlor, metolachlor + atrazine, pendimethalin, and oxadiazon. Isoxaben and atrazine treatments provided the poorest or most inconsistent sprangletop control.
The anticipated release of Enlist™ cotton, corn, and soybean cultivars likely will increase the use of 2,4-D, raising concerns over potential injury to susceptible cotton. An experiment was conducted at 12 locations over 2013 and 2014 to determine the impact of 2,4-D at rates simulating drift (2 g ae ha−1) and tank contamination (40 g ae ha−1) on cotton during six different growth stages. Growth stages at application included four leaf (4-lf), nine leaf (9-lf), first bloom (FB), FB + 2 wk, FB + 4 wk, and FB + 6 wk. Locations were grouped according to percent yield loss compared to the nontreated check (NTC), with group I having the least yield loss and group III having the most. Epinasty from 2,4-D was more pronounced with applications during vegetative growth stages. Importantly, yield loss did not correlate with visual symptomology, but more closely followed effects on boll number. The contamination rate at 9-lf, FB, or FB + 2 wk had the greatest effect across locations, reducing the number of bolls per plant when compared to the NTC, with no effect when applied at FB + 4 wk or later. A reduction of boll number was not detectable with the drift rate except in group III when applied at the FB stage. Yield was influenced by 2,4-D rate and stage of cotton growth. Over all locations, loss in yield of greater than 20% occurred at 5 of 12 locations when the drift rate was applied between 4-lf and FB + 2 wk (highest impact at FB). For the contamination rate, yield loss was observed at all 12 locations; averaged over these locations yield loss ranged from 7 to 66% across all growth stages. Results suggest the greatest yield impact from 2,4-D occurs between 9-lf and FB + 2 wk, and the level of impact is influenced by 2,4-D rate, crop growth stage, and environmental conditions.
Bandelier National Monument (BNM) was created to protect an extraordinary inventory of archaeological resources carved in the Tshirege Member of the Bandelier Tuff. These include more than one thousand excavated chambers, called cavates, used for dwelling, storage, and textile production. The glass-rich tuffs at the base of the Tshirege Member are poorly consolidated and susceptible to erosion by wind, rain, and mechanical abrasion, with resultant loss of cultural material. However, rock surfaces develop protective weathering rinds that are resistant to erosion. Using optical microscopy, SEM-EDS, XRD, and electron microprobe analysis, we determined that this rind consists of clay and silt sediments colonized by lichens and other surface biota, accompanied by the precipitation of secondary minerals in the near-surface pore space. Scoping experiments focused on glass-organic acid interactions indicate that oxalic acid excreted by microbial crust constituents catalyzes biogeochemical reactions that lead to the preferential dissolution of Si, Al, and Fe components of the volcanic glass; these cations become available for precipitation of opal, and smectite and sepiolite clays. Enzyme assays that quantify biological activity at outcrop surfaces indicate that microbial populations initially thrive as they derive nutrients from the dissolution reactions of the glass, but activity starts to decline as precipitation of secondary minerals limits access to new sources of nutrients, so that alteration processes are self-limiting. As case hardening progresses, imbibition rates at the surface decrease, and the erosion resistance of the altered surfaces is substantially improved. This article presents summary results of research conducted over a period of five years to characterize the roles of lichens and other microflora in rind formation, and the resulting contributions to tuff stability. 
The interaction of lichens and other microflora with rock surfaces in archaeological sites and monuments is usually explored in terms of biodeterioration and consequent damage. However, this study shows that, under some circumstances, lichens and microflora provide a level of erosion protection to relatively porous and unconsolidated rock strata that outweighs their biodeteriorative effects.
To measure transmission frequencies and risk factors for household acquisition of community-associated and healthcare-associated (HA-) methicillin-resistant Staphylococcus aureus (MRSA).
Prospective cohort study from October 4, 2008, through December 3, 2012.
Seven acute care hospitals in or near Toronto, Canada.
Total of 99 MRSA-colonized or MRSA-infected case patients and 183 household contacts.
Baseline interviews were conducted, and surveillance cultures were collected monthly for 3 months from household members, pets, and 8 prespecified high-use environmental locations. Isolates underwent pulsed-field gel electrophoresis and staphylococcal cassette chromosome mec typing.
Overall, 89 (49%) of 183 household contacts were MRSA colonized, with 56 (31%) detected at baseline. MRSA transmission from index case to contacts negative at baseline occurred in 27 (40%) of 68 followed-up households. Strains were identical within households. The transmission risk for HA-MRSA was 39% compared with 40% (P=.95) for community-associated MRSA. HA-MRSA index cases were more likely to be older and not practice infection control measures (P=.002–.03). Household acquisition risk factors included requiring assistance and sharing bath towels (P=.001–.03). Environmental contamination was identified in 78 (79%) of 99 households and was more common in HA-MRSA households.
Household transmission of community-associated and HA-MRSA strains was common and the difference in transmission risk was not statistically significant.
This article demonstrates that some 180 of the 2,400 text glosses in the ‘Leiden Glossary’ derive from an epitome of the Etymologiae. A reconstruction of this lost source shows that it consisted of abbreviated entries from all twenty of Isidore's books, with selected books heavily glossed in Old English. Mirroring the encyclopedic scope of the Etymologiae, this seventh-century epitome was extensively excerpted by glossary-compilers and gave thousands of English words their first written form.
Multimedia interventions are increasingly used to deliver information in order to promote self-care among patients with degenerative conditions. We carried out a realist review of the literature to investigate how the characteristics of multimedia psychoeducational interventions combine with the contexts in which they are introduced to help or hinder their effectiveness in supporting self-care for patients with degenerative conditions.
Electronic databases (Medline, Science Direct, PSYCHinfo, EBSCO, and Embase) were searched in order to identify papers containing information on multimedia psychoeducational interventions. Using a realist review approach, we reviewed all relevant studies to identify theories that explained how the interventions work.
Ten papers were included in the review. All interventions sought to promote self-care behaviors among participants. We examined the development and content of the multimedia interventions and the impact of patient motivation and of the organizational context of implementation. We judged seven studies to be methodologically weak. All completed studies showed small effects in favor of the intervention.
Significance of Results:
Multimedia interventions may provide high-quality information in an accessible format, with the potential to promote self-care among patients with degenerative conditions, if the patient perceives the information as important and develops confidence about self-care. The evidence base is weak, so that research is needed to investigate effective modes of delivery at different resource levels. We recommend that developers consider how an intervention will reduce uncertainty and increase confidence in self-care, as well as the impact of the context in which it will be employed.