The Keyhole is an internationally recognised front-of-pack nutrition label, guiding consumers to healthier food options. It marks products that meet specific criteria for dietary fats, sugars, fibres, salt and wholegrains. The objective of this study was to simulate the potential impact of the Keyhole on adolescents’ energy and nutrient intakes by modelling a shift from reported food intakes to foods meeting the Keyhole criteria.
Self-reported dietary intake data were derived from a cross-sectional survey. Multiple replacement scenarios were calculated, where foods meeting the Keyhole criteria replaced reported non-compliant foods with varying proportions of replacement.
Dietary survey ‘Riksmaten Adolescents 2016–2017’ in schools across Sweden.
A nationally representative sample of 3099 adolescents in school years 5, 8 and 11 (55 % girls).
Overall, replacement with foods meeting the Keyhole criteria led to more adolescents meeting nutrition recommendations. The largest median intake improvements were seen for wholegrains (+196 %), SFA (-13 %), PUFA (+17 %) and fibres (+15 %). The smallest improvements were seen for free sugars (-3 %) and salt (-2 %), partly explained by the ineligibility of the main food sources of free sugars for the Keyhole and the non-inclusion of ready meals, which are often high in salt. Most micronutrient intakes were stable or improved. Unintended effects included decreases in vitamin A, MUFA and energy intakes. The largest potential improvements in fat and fibre sources were observed in the youngest age group.
A shift to Keyhole alternatives for everyday foods would improve adolescents’ nutrient intakes, even with smaller exchanges.
This article discusses the objectives of the Stone Age Man in Caves of the Tatra Mountains project, which aims to explain the mysterious absence of evidence for the Palaeolithic in the Tatra Mountains of Eastern Europe. We present preliminary work from Hučivá Cave, which demonstrates clear traces of Magdalenian settlement within this region.
Illegal killing of wildlife is a major conservation issue that, to be addressed effectively, requires insight into the drivers of human behaviour. Here we adapt an established socio-psychological model, the theory of planned behaviour, to explore reasons for hunting the Endangered Bewick's swan Cygnus columbianus bewickii in the European Russian Arctic, using responses from hunters to a questionnaire survey. Wider ecological, legal, recreational and economic motivations were also explored. Of 236 hunters who participated overall, 14% harboured intentions to hunt Bewick's swan. Behavioural intention was predicted by all components of the theory of planned behaviour, specifically: hunters' attitude towards the behaviour, perceived behavioural control (i.e. perceived capability of being able to perform the behaviour) and their subjective norms (perception of social expectations). The inclusion of attitude towards protective laws and descriptive norm (perception of whether other people perform the behaviour) increased the model's predictive power. Understanding attitudes towards protective laws can help guide the design of conservation measures that reduce non-compliance. We conclude that conservation interventions should target the socio-psychological conditions that influence hunters' attitudes, social norms and perceived behavioural control. These may include activities that build trust, encourage support for conservation, generate social pressure against poaching, use motivations to prompt change and strengthen people's confidence to act. This approach could be applied to inform the effective design, prioritization and targeting of interventions that improve compliance and reduce the illegal killing of wildlife.
Comparing knowledge with belief can go wrong in two dimensions: If the authors employ a wider notion of knowledge, then they do not compare like with like because they assume a narrow notion of belief. If they employ only a narrow notion of knowledge, then their claim is not supported by the evidence. Finally, we sketch a superior teleological view.
In two experimental studies, we tested the hypothesis that negative mood would hinder the revision of negative beliefs in response to unexpectedly positive information in depression, whereas positive mood was expected to enhance belief updating.
In study 1 (N = 101), we used a subclinical sample to compare the film-based induction of sad v. happy mood with a distraction control group. Subsequently, participants underwent a well-established paradigm to examine intra-individual changes in performance-related expectations after unexpectedly positive performance feedback. In study 2, we applied the belief-updating task from study 1 to an inpatient sample (N = 81) and induced sad v. happy mood via film-clips v. recall of autobiographic events.
The results of study 1 showed no significant group differences in belief updating; however, the severity of depressive symptoms was a negative predictor of belief revision, and there was a non-significant trend suggesting that the presence of sad mood hindered belief updating in the subgroup of participants with a diagnosed depressive episode. Study 2 revealed that participants updated their expectations significantly less in line with positive feedback when they underwent the induction of negative mood prior to feedback, relative to positive mood.
By indicating that the presence of negative mood can hinder the revision of negative beliefs in clinically depressed people, our findings suggest that learning from new experiences can be hampered if state negative mood is activated. Thus, interventions relying on learning from novel positive experiences should aim at reducing state negative mood in depression.
We summarize some of the past year's most important findings within climate change-related research. New research has improved our understanding of Earth's sensitivity to carbon dioxide, finds that permafrost thaw could release more carbon emissions than expected and that the uptake of carbon in tropical ecosystems is weakening. Adverse impacts on human society include increasing water shortages and impacts on mental health. Options for solutions emerge from rethinking economic models, rights-based litigation, strengthened governance systems and a new social contract. The disruption caused by COVID-19 could be seized as an opportunity for positive change, directing economic stimulus towards sustainable investments.
A synthesis is made of ten fields within climate science where there have been significant advances since mid-2019, through an expert elicitation process with broad disciplinary scope. Findings include: (1) a better understanding of equilibrium climate sensitivity; (2) abrupt thaw as an accelerator of carbon release from permafrost; (3) changes to global and regional land carbon sinks; (4) impacts of climate change on water crises, including equity perspectives; (5) adverse effects on mental health from climate change; (6) immediate effects on climate of the COVID-19 pandemic and requirements for recovery packages to deliver on the Paris Agreement; (7) suggested long-term changes to governance and a social contract to address climate change, learning from the current pandemic; (8) an updated positive cost–benefit ratio and new perspectives on the potential for green growth in the short and long term; (9) urban electrification as a strategy to move towards low-carbon energy systems; and (10) rights-based litigation as an increasingly important method to address climate change, with recent clarifications on the legal standing and representation of future generations.
Social media summary
Stronger permafrost thaw, COVID-19 effects and growing mental health impacts among highlights of latest climate science.
Background: Contaminated surfaces within patient rooms and on shared equipment are a major driver of healthcare-acquired infections (HAIs). The emergence in the New York City metropolitan area of Candida auris, a multidrug-resistant fungus with extended environmental viability, has made a standardized assessment of cleaning protocols even more urgent for our multihospital academic health system. We therefore sought to create an environmental surveillance protocol to detect C. auris and to assess patient room contamination after discharge cleaning by different chemicals and methods, including touch-free application using an electrostatic sprayer. Surfaces disinfected using touch-free methods may not appear disinfected when assessed by fluorescent tracer dye or ATP bioluminescent assay. Methods: We focused on surfaces within the patient zone that are touched by the patient or healthcare personnel prior to contact with the patient. Our protocol sampled the over-bed table, call button, oxygen meter, privacy curtain, and bed frame using nylon-flocked swabs dipped in nonbacteriostatic sterile saline. We swabbed a 36-cm2 surface area on each sample location shortly after the room was disinfected, immediately inoculated the swab on a blood agar 5% TSA plate, and then incubated the plate for 24 hours at 36°C. Contamination with common environmental bacteria was calculated as CFU per plate over the swabbed surface area, and a cutoff of 2.5 CFU/cm2 was used to determine whether a surface passed inspection. Limited data exist on acceptable microbial limits for healthcare settings, but the aforementioned cutoff has been used in food preparation. Results: Over a year-long period, terminal cleaning had an overall failure rate of 6.5% for 413 surfaces swabbed. 
We used the protocol to compare the normal application of either peracetic acid/hydrogen peroxide or bleach using microfiber cloths to a new method using sodium dichloroisocyanurate (NaDCC) applied with microfiber cloths and electrostatic sprayers. The normal protocol had a failure rate of 9%, and NaDCC had a failure rate of 2.5%. The oxygen meter had the highest failure rate under the normal method (18.2%), whereas the curtain had the highest failure rate under the NaDCC method (11%). In addition, we swabbed 7 rooms previously occupied by C. auris–colonized patients for C. auris contamination of environmental surfaces, including the mobile medical equipment of the 4 patient care units that contained these rooms. We did not find any C. auris, and we continue data collection. Conclusions: A systematic environmental surveillance system is critical for healthcare systems to assess touch-free disinfection and identify MDRO contamination of surfaces.
In the present study, we identified the ectoparasite communities of red foxes in three regions of Poland that encompassed two endemic regions for the occurrence of Dermacentor reticulatus, as well as a region that is free of this tick species (‘gap’ area). Our study sites were selected to enable the role of foxes as hosts for juvenile (nest dwelling) and adult (exophilic) D. reticulatus ticks to be determined, and to assess their contribution to the spread of this important vector of Babesia canis. We also compared the ectoparasite communities of adult foxes with those of fox cubs. Finally, we carried out a systematic search for subcutaneous ticks, determining their prevalence and abundance. In 2016–2018, 366 adult foxes and 25 live-trapped cubs were examined for ectoparasites. Ectoparasites were identified based on morphological features, PCR amplification and sequencing. The total prevalence of ectoparasites was higher in cubs (68%) than in adults (62.8%). In adults, 15 parasite species were recorded, including four tick species, seven flea species, scabies, and one Anopluran species each in the genera Felicola and Lipoptena. In cubs, six ectoparasite species were found, including Ixodes kaiseri, a species not found in adults. Although Ixodes ricinus and D. reticulatus were the dominant tick species on adult foxes, no D. reticulatus ticks were found on cubs. Subcutaneous ticks were common (38%) and abundant in all areas. Molecular analysis of subcutaneous nodules allowed the identification of 17 I. ricinus and five D. reticulatus. In conclusion, red foxes play a minor role as hosts of D. reticulatus.
As discussed in Chapters 1 and 2, one of the central tenets of the HSCA 2012 was the desirability of increasing the involvement of GPs (and other clinicians) in the commissioning of services for their patients. This ideological commitment – based upon belief and founded, in part at least, upon an implicit denigration of managerial work (in order to increase control over the NHS and commissioners) – had far-reaching consequences in the design of the reforms. For example, the initial separation of responsibility for commissioning primary care services from secondary and community services was deemed necessary because of the potential for conflicts of interest, whilst the creation of CCGs as ‘membership organisations’ had, as seen in Chapter 3, significant implications for their organisation and governance. The initial White Paper, ‘Equity and Excellence’ (Department of Health, 2010a: 9) was relatively non-specific about the expected benefits of clinical leadership of commissioning. It was argued that:
The headquarters of the NHS will not be in the Department of Health or the new NHS Commissioning Board but instead, power will be given to the front-line clinicians and patients. The headquarters will be in the consulting room and clinic. The Government will liberate the NHS from excessive bureaucratic and political control, and make it easier for professionals to do the right things for and with patients, to innovate and improve outcomes.
The document suggested that the proposals would: ‘liberate professionals and providers from top down control’; shift decision making closer to patients; enable better dialogue between primary and secondary care practitioners; and ensure that service development had real clinical involvement. However, the mechanisms underlying these perceived benefits were unstated. Furthermore, it was claimed that, whilst previous incarnations of GP-led commissioning (which in the UK go back to the creation of ‘GP fundholding’ in the 1990s) had delivered some benefits, these had been limited by the failure to give those involved complete autonomy and real budgets. The creation of CCGs, it was argued, would remedy these problems and ‘liberate’ clinicians to significantly improve care.
The wide-ranging reforms made to health and care systems in England, as part of the HSCA 2012, created an enormous shakeup of the way the public health function is delivered. Key public health responsibilities were transferred from the NHS to local government councils. In addition, PHE was established as the national agency for public health.
This chapter examines what these changes have meant for the commissioning of services to improve population health. Commissioning in relation to the health improvement function refers to the strategic planning and purchasing of services that could include smoking cessation, weight management and drug and alcohol services, public health services for children and young people, comprehensive sexual health services and campaigns, dental public health services and services to prevent cancer and long-term conditions.
The political backdrop
The government's goal was to develop a ‘public health service that achieves excellent results, unleashing innovation and liberating professional leadership’ (Department of Health, 2010b). There were a number of important themes demonstrated in the structural changes. First, they represented an attempt to enhance democratic accountability and challenge the old ‘command and control’ model. Within the wider context of the localism agenda, the relocation of public health functions was an attempt to ensure that local people made local decisions to improve the health of local populations. Second, the government was attempting to shift the focus from processes onto outcomes. A comprehensive set of indicators was developed within a ‘public health outcomes framework’, against which local public health systems would be assessed. This would enable transparency and an element of comparability between different local areas. Third, there was an attempt to take a ‘different’ (though not new) approach to public health – one that takes a ‘life course’ perspective, and that places importance on wider determinants of health, particularly in relation to people's socioeconomic contexts. Fourth, there was a focus on ensuring that decisions are based on the best possible evidence of what works – a key role for PHE. Fifth, there was an emphasis on efficiency, particularly with regard to being ‘joined up’ and streamlined. And finally, consistent with wider policy, there was a general push towards commissioning, and lead organisations being solely commissioning organisations.
Domestic courts enjoy generous attention in international political and legal climate change literature. As a result of the reluctance of national governments to pursue climate protection measures, courts are called on to enforce international climate goals. This article assesses two domestic climate change cases (the Thabametsi Case and the Vienna Airport Case) in the light of Anthea Roberts’ functional understanding of the role of domestic courts in international law. It argues that domestic courts play a pivotal role in linking international obligations of conduct with national obligations of result. This role depends on domestic contexts and, therefore, requires a comparative approach.
Research has revealed that negative expectations impact depressive symptoms. However, research on how dysfunctional expectations change in depression has so far been lacking. Therefore, the present research aimed to fill this gap by testing the hypothesis that people with major depressive disorder (MDD), contrary to healthy individuals, maintain their expectations despite experiences that positively disconfirm those expectations. Further, it was hypothesized that cognitive immunization (a cognitive reappraisal of the disconfirming evidence) is a mechanism underlying the persistence of expectations.
In Study 1, we compared individuals with MDD (N = 58) to healthy individuals (N = 59). Participants worked on the same performance test and received standardized feedback that either confirmed or disconfirmed their initial performance expectations. In Study 2, we investigated the effects of cognitive immunization on expectation change among 59 individuals reporting elevated levels of depression by varying the appraisal of expectation-disconfirming feedback.
Results from Study 1 show that in the expectation-disconfirming condition, healthy individuals changed their expectations, whereas individuals with MDD did not. No such difference between the two groups was found for expectation-confirming feedback. Results from Study 2 indicated that varying cognitive immunization impacted expectation change, thus suggesting a crucial role of cognitive immunization in expectation change.
These two studies indicated that individuals suffering from depression have more difficulties in changing their expectations after disconfirming experiences than do healthy individuals, and cognitive immunization might be a core mechanism underlying expectation persistence. Therefore, psychotherapeutic interventions should aim to inhibit cognitive immunization processes to enhance expectation change.
Youths with obsessive–compulsive disorder (OCD) experience severe distress and impaired functioning at school and at home. Critical cognitive domains for daily functioning and academic success are learning, memory, cognitive flexibility and goal-directed behavioural control. Performance in these important domains among teenagers with OCD was therefore investigated in this study.
A total of 36 youths with OCD and 36 healthy comparison subjects completed two memory tasks, Pattern Recognition Memory (PRM) and Paired Associates Learning (PAL), as well as the Intra-Extra Dimensional Set Shift (IED) task to quantitatively gauge learning and cognitive flexibility. A subset of 30 participants from each group also completed a Differential-Outcome Effect (DOE) task followed by a Slips-of-Action Task, designed to assess the balance of goal-directed and habitual behavioural control.
Adolescent OCD patients showed a significant learning and memory impairment. Compared with healthy comparison subjects, they made more errors on PRM and PAL and in the first stages of IED involving discrimination and reversal learning. Patients were also slower to learn about contingencies in the DOE task and were less sensitive to outcome devaluation, suggesting an impairment in goal-directed control.
This study advances the characterization of juvenile OCD. Patients demonstrated impairments in all learning and memory tasks. We also provide the first experimental evidence of impaired goal-directed control and lack of cognitive plasticity early in the development of OCD. The extent to which the impairments in these cognitive domains impact academic performance and symptom development warrants further investigation.
Consistently high-quality health care is expected throughout Europe while, concurrently, the financial resources of member states are decreasing. National Health Technology Assessment (HTA) institutes inform evidence-based reimbursement decisions in their national contexts, leading to redundancies in HTA production and tying up limited resources. Since 2006, the European Union project European Network for HTA (EUnetHTA) has aimed to enhance the efficient use of HTA resources and facilitate transnational collaboration. Our aim is to present previous experience in the joint assessment of medical devices. Furthermore, possible benefits of European collaboration for stakeholders will be discussed.
Processes and challenges of the completed EUnetHTA Joint Action (JA) 2 are summarized and discussed. Benefits, aims and opportunities of the ongoing EUnetHTA JA 3 are described.
Six rapid assessments of medical devices, focusing on the assessment of effectiveness and safety, were published during EUnetHTA JA 2. Challenges in European medical device assessment encompass the choice of topics, the time point of assessments and the lack of European standards for systematic patient involvement. Characteristics of medical devices, like learning curves, call for monitoring them throughout their lifecycle.
The benefit of European collaboration for stakeholders is manifold: uncertainty with regard to the actual added value of a technology is minimized through Early Dialogues; harmonized and transparent assessment processes increase the quality of reports; work division among HTA organizations allows a resource-efficient assessment of a larger number of technologies; and patient involvement ensures consideration of patient-relevant endpoints.
The importance of cross-border collaboration in HTA is shown in the continuation of the EUnetHTA project, which aims to sustainably strengthen international collaboration even after expiration of EU-funding.
European collaboration in medical device assessment can ensure cross-border health care and efficient cooperation of national health systems. The focus should be set on a wide implementation of jointly established methods and quality standards. The European collaboration can lead to a concrete benefit for various stakeholders.
The optimal balance between central governmental authority and the degree of autonomy of local public bodies is an enduring issue in public policy. The UK National Health Service is no exception, with NHS history, in part at least, a history of repeated cycles of centralisation and decentralisation of decision-making power. Most recently, a significant reorganisation of the NHS in 2012–13 was built around the creation of new and supposedly more autonomous commissioning organisations (Clinical Commissioning Groups – CCGs). Using Bossert's (1998) concept of ‘decision space’, we explored the experiences of local commissioners as they took on their new responsibilities. We interviewed commissioning staff from all of the CCGs in two regional health care ‘economies’, exploring their perceptions of autonomy and their experiences over time. We found significant early enthusiasm for, and perceptions of, increased autonomy tempered in the vertical dimension by increasingly onerous and prescriptive monitoring regimes, and in the horizontal dimension by the proliferation of overlapping networks, inter-organisational groups and relationships. We propose that, whatever the balance between central and local control that is adopted, complex public services require some sort of meso-level oversight from organisations able to ‘hold the ring’ between competing interests and to take a regional view of the needs of the local health system. This suggests that local organisational autonomy in such services will always be constrained.
Objectives: Many of the currently used health technologies have never been systematically assessed or are misused, overused or superseded. Therefore, they may be ineffective. Active identification of ineffectiveness in health care is gaining importance to facilitate best care for patients and optimal use of limited resources. The present research analyzed processes and experiences of programs for identifying ineffective health technologies. The goal of this study was to elucidate factors that facilitate implementation.
Methods: Based on an overview article, a systematic literature search and unsystematic hand-search were conducted. Further information was gained from international experts.
Results: Seven programs were identified that include identification, prioritization and assessment of ineffective health technologies and dissemination of recommendations. The programs are quite similar regarding their goals, target groups and criteria for identification and prioritization. Outputs, mainly HTA reports or lists, are mostly disseminated by means of the internet. Top-down and bottom-up programs both have benefits in terms of implementation of recommendations, either as binding guidelines and decisions or as nonbinding information for physicians and other stakeholders. Crucial facilitators of implementation are political will, transparent processes and broad stakeholder involvement focusing on physicians.
Conclusions: All programs can improve the quality of health care and enable cost reduction in supportive surrounding conditions. Physicians and patients must be continuously involved in the process of evaluating health technologies. Additionally, decision makers must support programs and translate recommendations into concrete actions.