Many responses to the resurgence of “majority nationalism” assume that there is nothing normatively significant to the claims of national majorities. They accordingly seek to blunt the force of those claims – or simply redescribe them in ways that do not account for majority nationalists’ central commitments or concerns. Yet the very arguments used to ground minority rights in Kymlicka’s works appear to equally justify at least some majority cultural rights. Where a group possesses majority status by reasonably benign means and yet faces threats to its culture through the operation of, for example, globalization, Kymlickean arguments for minority rights grounded in cultural vulnerability equally justify majority cultural rights. In “Nationhood, Multiculturalism, and the Ethics of Membership,” Kymlicka presents justice-based reasons to think that majority rights claims should nonetheless be neutralized. Yet his arguments assume that majority and minority rights claims will only be made within the boundaries of a nation-state and that rights recognition in those circumstances will be a “zero sum” game. This assumption too is unwarranted in a globalized world. The issue of majority rights claims is at least more complicated than Kymlicka allows.
Every country must allocate final decision-making authority over different issues/subjects within its boundaries. Historically, many scholars working on this topic implicitly assumed that identifying the features providing entities with justified claims for authority and the entities possessing those features would also identify which groups should have which powers (or vice versa). However, many candidate allocative principles select multiple entities as candidates for some sub-state authority and yet fail to explain which powers each should possess. Further work must explain which groups should possess which powers, and what to do when two groups can make equally valid authority claims under the same principle. Subsidiarity, the principle under which authority should presumptively belong to the entity representing those ‘most affected’ by its exercise and capable of addressing underlying problems, is one of the few principles focused on identifying which groups should have which powers. Unfortunately, subsidiarity alone does not provide guidance on many issues/subjects. Useful subsidiarity-related guidance relies on balancing underlying justificatory interests, which do the real allocative work. Another allocative principle remains necessary. A deflationary account of subsidiarity’s allocative potential nonetheless provides insights into how to articulate a new principle, and into accounts of subsidiarity that can fulfill other moral roles.
‘Practical’ approaches to human rights hold that analysis of legal human rights must attend to the practice(s) of international human rights law, and that the nature and justification of international human rights is best determined by attending to their role(s) in international human rights law’s system of normative practices, not to analogous moral rights outside it. These core tenets plausibly explain the apparent normativity of international human rights law despite controversies about the status of many ‘rights’ in the ‘International Bill of Rights’. Yet plausible practical approaches require clear and compelling accounts of which practices qualify as human rights practices. Most existing accounts view ‘responses’ to claims made in the name of the international legal community as key to the identification of human rights, so that activities by domestic governments and non-governmental actors qualify as relevant practices. While understandable, these ‘responsive’ accounts of practice create more problems than they solve. This work accordingly promotes a largely unexplored account on which ‘human rights practices’ are strictly defined by international legal doctrine. This ‘doctrinal’ account of practice is most likely to preserve the potential benefits of practical approaches to human rights without generating an unduly expansive rights register or adopting strong theoretical commitments about the nature of law.
Most hand hygiene (HH) intervention studies use a quasi-experimental design: primarily uncontrolled before-and-after studies, or controlled before-and-after studies with a nonequivalent control group. Well-funded studies with improved designs and HH interventions are needed.
Objectives:
To evaluate healthcare worker (HCW) HH compliance with alcohol-based hand rub (ABHR) through direct observation (human observer) and 2 electronic technologies: a radio frequency identification (RFID) badge system and an invasive device sensor.
Methods:
In our controlled experimental study, 2,269 observations were made over a 6-month period from July 1 to December 30, 2020, in a 4-bed intensive care unit. We compared HH compliance between a basic feedback loop system with RFID badges and an enhanced feedback loop system that utilized sensors on invasive devices.
Results:
Real-time feedback by wireless technology connected to a patient’s invasive device (enhanced feedback loop) resulted in a significant increase in HH compliance (69.5% in the enhanced group vs 59.1% in the basic group; P = .0001).
Conclusion:
An enhanced feedback loop system connected to invasive devices, providing real-time alerts to HCWs, is effective in improving HH compliance.
We investigated real-world vaccine effectiveness for Oxford-AstraZeneca (ChAdOx1) and CoronaVac against laboratory-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among healthcare workers (HCWs).
Methods:
We conducted a retrospective cohort study among HCWs (aged ≥18 years) working in a private healthcare system in Brazil between January 1, 2021 and August 3, 2021, to assess vaccine effectiveness. We calculated vaccine effectiveness as 1 − rate ratio (RR), with RR determined by adjusting Poisson models with the occurrence of SARS-CoV-2 infection as the outcome and the vaccination status as the main variable. We used the logarithmic link function and simple models adjusting for sex, age, and job types.
Results:
In total, 13,813 HCWs met the inclusion criteria for this analysis. Among them, 6,385 (46.2%) received the CoronaVac vaccine, 5,916 (42.8%) received the ChAdOx1 vaccine, and 1,512 (11.0%) were not vaccinated. Overall, COVID-19 occurred in 6% of unvaccinated HCWs, 3% of HCWs who received 2 doses of CoronaVac vaccine, and 0.7% of HCWs who received 2 doses of ChAdOx1 vaccine (P < .001). In the adjusted analyses, the estimated vaccine effectiveness rates were 51.3% for CoronaVac, and 88.1% for ChAdOx1 vaccine. Both vaccines reduced the number of hospitalizations, the length of hospital stay, and the need for mechanical ventilation. In addition, 19 SARS-CoV-2 samples from 19 HCWs were screened for mutations of interest. Of 19 samples, 18 were the γ (gamma) variant.
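As a rough illustration of the VE = 1 − RR definition given in the methods, a crude (unadjusted) effectiveness can be computed directly from the attack rates reported above; this sketch is not the study's adjusted Poisson model, which additionally controls for sex, age, and job type.

```python
# Illustrative only: crude (unadjusted) vaccine effectiveness, VE = 1 - RR,
# where RR is the ratio of the attack rate in the vaccinated group to the
# attack rate in the unvaccinated group.

def crude_ve(attack_rate_vaccinated, attack_rate_unvaccinated):
    """Return crude vaccine effectiveness as a percentage."""
    rr = attack_rate_vaccinated / attack_rate_unvaccinated
    return 100 * (1 - rr)

# Attack rates from the abstract: 6% unvaccinated, 3% CoronaVac, 0.7% ChAdOx1.
print(round(crude_ve(0.03, 0.06), 1))   # CoronaVac crude VE: 50.0
print(round(crude_ve(0.007, 0.06), 1))  # ChAdOx1 crude VE: 88.3
```

The crude figures (50.0% and 88.3%) are close to but not identical with the adjusted estimates reported below (51.3% and 88.1%), as expected once covariates are modeled.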
Conclusions:
Although both COVID-19 vaccines (viral vector and inactivated virus) can significantly prevent COVID-19 among HCWs, CoronaVac was much less effective. The COVID-19 vaccines were also effective against the dominant γ variant.
The role of fire in the management of degraded areas remains strongly debated. Here we experimentally compare removal and infestation of popcorn kernels (Zea mays L. – Poaceae) and açaí fruits (Euterpe oleracea Mart. – Arecaceae) in one burned and two unburned savanna habitats in the eastern Brazilian Amazon. In each habitat, a total of ten experimental units (five per seed type) were installed, each with three treatments: (1) open access, (2) vertebrate access, and (3) invertebrate access. Generalized linear models showed significant differences in both seed removal (P < 0.0001) and infestation (P < 0.0001) among seed types, habitats, and access treatments. Burned savanna had the highest overall seed infestation rate (24.3%), and invertebrate access increased açaí seed infestation levels to 100% in the burned savanna. Increased levels of invertebrate seed infestation in burned savanna suggest that preparation burning may be of limited use for the management and restoration of such habitats in tropical regions.
COVID-19-related controversies concerning the allocation of scarce resources, travel restrictions, and physical distancing norms each raise a foundational question: How should authority, and thus responsibility, over healthcare and public health law and policy be allocated? Each controversy raises principles that support claims by traditional wielders of authority in “federal” countries, like federal and state governments, and less traditional entities, like cities and sub-state nations. No existing principle divides “healthcare and public health law and policy” into units that can be allocated in intuitively compelling ways. This leads to puzzles concerning (a) the principles for justifiably allocating “powers” in these domains and (b) whether and how they change during “emergencies.” This work motivates the puzzles, explains why resolving them should be part of long-term responses to COVID-19, and outlines some initial COVID-19-related findings that shed light on justifiable authority allocation, emergencies, emergency powers, and the relationships between them.
Cognitive tests of inhibitory control show variable results for the differential diagnosis between behavioural variant of Frontotemporal Dementia (bvFTD) and Alzheimer’s disease (AD). We compared the diagnostic accuracies of tests of inhibitory control and of a behavioural questionnaire, to distinguish bvFTD from AD.
Methods:
Three groups of participants were enrolled: 27 bvFTD patients, 25 AD patients, and 24 healthy controls. Groups were matched for gender, education, and socio-economic level. Participants underwent a comprehensive neuropsychological assessment of inhibitory control, including Hayling Test, Stroop, the Five Digits Test (FDT) and the Delay Discounting Task (DDT). Caregivers completed the Barratt Impulsiveness Scale 11th version (BIS-11).
Results:
The bvFTD and AD groups showed no difference in the tasks of inhibitory control, while the caregiver questionnaire revealed that bvFTD patients were significantly more impulsive (BIS-11: bvFTD 76.1 ± 9.5, AD 62.9 ± 13; p < .001).
Conclusions:
Neuropsychological tests of inhibitory control failed to distinguish bvFTD from AD. By contrast, the caregiver-completed impulsivity questionnaire provided good discrimination between bvFTD and AD. These results highlight the current limits of cognitive measures of inhibitory control for the differential diagnosis between bvFTD and AD, whereas questionnaire information appears more reliable and in line with clinical diagnostics.
We prove two main results on Denjoy–Carleman classes: (1) a composite function theorem, which asserts that a function $f(x)$ in a quasianalytic Denjoy–Carleman class ${\mathcal{Q}}_{M}$, which is formally composite with a generically submersive mapping $y=\varphi(x)$ of class ${\mathcal{Q}}_{M}$ at a single given point in the source (or in the target) of $\varphi$, can be written locally as $f=g\circ \varphi$, where $g(y)$ belongs to a shifted Denjoy–Carleman class ${\mathcal{Q}}_{M^{(p)}}$; (2) a statement on a similar loss of regularity for functions definable in the $o$-minimal structure given by expansion of the real field by restricted functions of quasianalytic class ${\mathcal{Q}}_{M}$. Both results depend on an estimate for the regularity of a ${\mathcal{C}}^{\infty}$ solution $g$ of the equation $f=g\circ \varphi$, with $f$ and $\varphi$ as above. The composite function result also depends on a quasianalytic continuation theorem, which shows that the formal assumption at a given point in (1) propagates to a formal composition condition at every point in a neighbourhood.
Biological disease-modifying anti-rheumatic drugs (bDMARDs) have become firmly established in the management of patients with rheumatoid arthritis (RA), but some patients do not improve despite therapy. This study evaluated predictors of the effectiveness of bDMARDs in a cohort of RA patients in the Brazilian Public Health System.
METHODS:
RA individuals treated with bDMARDs were included in this open prospective cohort study. The Clinical Disease Activity Index (CDAI) was used to assess effectiveness, comparing results at baseline and after 6 months of follow-up. The association between socio-demographic and clinical characteristics and disease activity measured by the CDAI was also investigated. The bDMARD was considered effective when the patient achieved remission or low disease activity, and not effective when moderate or high disease activity remained. Pearson's chi-square test was applied in the univariate analysis to evaluate the association of effectiveness measured by the CDAI with socio-demographic variables (gender, education, marital status, and race) and clinical variables (type of drug, EuroQol (EQ)-5D, and Health Assessment Questionnaire (HAQ)). Logistic regression was applied in the multivariate analysis to the variables that presented p < .20 in the univariate analysis.
RESULTS:
All 266 RA patients completed six months of follow-up. The most widely used bDMARD was adalimumab (57.1 percent), with etanercept used by 22.2 percent, golimumab by 7.5 percent, abatacept by 4.5 percent, tocilizumab by 3.4 percent, infliximab by 2.6 percent, certolizumab by 1.5 percent, and rituximab by 1.1 percent. The bDMARDs reduced disease activity as measured by CDAI at six months of follow-up (p < .001). The percentage of patients achieving remission or low disease activity was 40.6 percent. bDMARDs were more effective in patients with better functionality at the beginning of treatment (odds ratio, OR = 2.140; 95 percent confidence interval, CI 1.219–3.756) and in patients who had not previously used a bDMARD (OR = 2.150; 95 percent CI 1.144–4.042).
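To make the odds ratios above concrete, the sketch below shows how an OR is derived from a 2x2 table of exposure (e.g. bDMARD-naive vs prior use) against outcome (response vs non-response). The counts are hypothetical illustrations, not the study's data, and the study's reported ORs come from an adjusted logistic regression rather than a raw table.

```python
# Illustrative only: odds ratio from a 2x2 table of exposure vs outcome.
# OR = (a * d) / (b * c) for the table [[a, b], [c, d]].

def odds_ratio(exposed_event, exposed_no_event,
               unexposed_event, unexposed_no_event):
    """Odds ratio comparing the odds of the event in the exposed group
    to the odds in the unexposed group."""
    return (exposed_event * unexposed_no_event) / (
        exposed_no_event * unexposed_event)

# Hypothetical example: 60 of 100 bDMARD-naive patients responded,
# versus 41 of 100 patients with prior bDMARD use.
print(round(odds_ratio(60, 40, 41, 59), 2))  # 2.16
```

An OR near 2, as reported in the abstract, means the odds of achieving remission or low disease activity are roughly doubled for patients with the favourable baseline characteristic.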
CONCLUSIONS:
In this real-world study, baseline functionality and prior bDMARD use were predictors of treatment effectiveness in patients with RA treated with bDMARDs.
Anti-tumor necrosis factor drugs (anti-TNF) are the last line of treatment for psoriatic arthritis (PsA) in the guidelines of the Brazilian Public Health System (SUS). Data on the effectiveness of these drugs are scarce in the Latin American population. This study evaluated the effectiveness of anti-TNF drugs in a cohort of patients with PsA in the SUS.
METHODS:
PsA patients treated with anti-TNF drugs were included in an open prospective cohort study. The Bath Ankylosing Spondylitis Disease Activity Index (BASDAI) and Clinical Disease Activity Index (CDAI) were used to assess effectiveness at six months of follow-up. The anti-TNF was considered effective when the patient achieved a score of four or less on the BASDAI or ten or less on the CDAI. Frequency distributions were compiled for the sociodemographic variables, and means and standard deviations (SD) were used for clinical variables. The paired Student t-test was used to evaluate the differences between baseline and 6 months as measured by BASDAI and CDAI.
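The paired t-test described above compares each patient's baseline score with their own 6-month score. As a minimal sketch, the statistic can be computed by hand from the per-patient differences; the scores below are hypothetical, not the study's data.

```python
# Minimal sketch of the paired Student t statistic:
# t = mean(d) / (sd(d) / sqrt(n)), where d are within-patient differences.
import math
import statistics

def paired_t(before, after):
    """Return the paired t statistic for two matched samples."""
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD (n - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical BASDAI scores for six patients, baseline vs 6 months.
baseline = [6.2, 5.8, 7.1, 6.5, 5.9, 6.8]
month6   = [3.1, 4.0, 3.5, 2.9, 4.2, 3.3]
print(round(paired_t(baseline, month6), 2))  # 7.87
```

A large positive t on n − 1 degrees of freedom corresponds to a small p-value, which is how the study reports its p < .001 improvement.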
RESULTS:
Fifty-four patients with PsA completed six months of follow-up. The mean age of the patients was 54.03 years (SD 10.44) and the mean disease duration was 8.00 years (SD 7.49). Furthermore, 50 percent of the patients were female, 61.1 percent were white, and 59.6 percent were married. The most commonly used anti-TNF was adalimumab (63.0 percent), followed by etanercept (20.4 percent) and infliximab (16.7 percent). The anti-TNF drugs reduced disease activity measured by BASDAI and CDAI at six months of follow-up (p < .001). The percentage of patients achieving effectiveness with anti-TNF was 61.1 percent as measured by BASDAI and 53.7 percent by CDAI.
CONCLUSIONS:
Anti-TNF drugs proved effective in more than half of the patients at six months. This result highlights the importance of treatment with anti-TNF drugs in the Brazilian population. Long-term data are needed to confirm these results.
There is growing evidence that those in socially peripheral and disadvantaged groups are more likely to have suffered an acquired brain injury (ABI), particularly a traumatic brain injury (TBI). What is less clear is whether this association is due to common risk factors for social exclusion and for brain injury, or whether each increases the risk of the other. Of course, the likely answer is that these factors are not mutually exclusive and that each factor or combination of factors plays some part in increasing this association. It is the aim of this chapter to examine the current evidence in order to understand this association better and to look at the implications of this analysis in terms of intervention for both the prevention and treatment of brain injury and for addressing social exclusion.
The most common form of ABI (brain injury that occurs after birth) is TBI. TBI is caused by physical impact to the head, either by sudden acceleration or sudden deceleration. TBI often leads to a characteristic pattern of deficits that makes the demands of life more difficult to meet. Common problems include cognitive deficits, such as memory, concentration and executive problems. Executive problems affect a person's ability to problem-solve, plan and organise goal-directed behaviour. In addition, certain non-cognitive neurobehavioural changes are common, such as reduced ability to regulate emotion, disinhibition, impulsivity and problems with social cognition. The latter affect a person's ability to read social cues and to moderate their behaviour so that it is appropriate to the situation.
It is easy to speculate about how these deficits could lead the individual to become marginalised by society as they affect many of the core skills required to conform to accepted patterns of behaviour. Cognitive problems make it difficult to maintain work in the open market. Other neurobehavioural deficits make it difficult to maintain social relationships, both at work and outside work. Such difficulties can therefore clearly have an impact on the likelihood of becoming homeless, falling foul of the law and failing to cope adequately with the demands of civilian life after time spent in the structured and regulated world of the military.
Little is known about predictors of recovery from bipolar depression.
Aims
We investigated affective instability (a pattern of frequent and large mood shifts over time) as a predictor of recovery from episodes of bipolar depression and as a moderator of response to psychosocial treatment for acute depression.
Method
A total of 252 out-patients with DSM-IV bipolar I or II disorder and who were depressed enrolled in the Systematic Treatment Enhancement Program for Bipolar Disorder (STEP-BD) and were randomised to one of three types of intensive psychotherapy for depression (n = 141) or a brief psychoeducational intervention (n = 111). All analyses were by intention-to-treat.
Results
Degree of instability of symptoms of depression and mania predicted a lower likelihood of recovery and longer time until recovery, independent of the concurrent effects of symptom severity. Affective instability did not moderate the effects of psychosocial treatment on recovery from depression.
Conclusions
Affective instability may be a clinically relevant characteristic that influences the course of bipolar depression.
Assessing changes in adolescents’ BMI over brief periods could contribute to detection of acute changes in weight status and prevention of overweight. The objective of this study was to analyse the BMI trajectory and the excessive weight gain of Brazilian adolescents over 3 years and the association with demographic and socio-economic factors. Data regarding the BMI of 1026 students aged between 13 and 19 years were analysed over 3 consecutive years (2010, 2011 and 2012) from the Adolescent Nutritional Assessment Longitudinal Study. Linear mixed effects models were used to assess the BMI trajectory according to the type of school attended (public or private), skin colour, socio-economic status and level of maternal schooling by sex. Associations between excessive weight gain and socio-economic variables were identified by calculation of OR. Boys attending private schools (β coefficient: 0·008; P=0·01), those with white skin (β coefficient: 0·007; P=0·04) and those whose mothers had >8 years of schooling (β coefficient: 0·009; P=0·02) experienced greater BMI increase than boys and girls in other groups. Boys in private schools also presented higher excessive weight gain compared with boys attending public schools (P=0·03). Boys attending private schools experienced greater BMI increase and excessive weight gain, indicating the need to develop specific policies for the prevention and reduction of overweight in this population.
Recent studies suggest that sand can serve as a vehicle for exposure of humans to pathogens at beach sites, resulting in increased health risks. Sampling for microorganisms in sand should therefore be considered for inclusion in regulatory programmes aimed at protecting recreational beach users from infectious disease. Here, we review the literature on pathogen levels in beach sand, and their potential for affecting human health. In an effort to provide specific recommendations for sand sampling programmes, we outline published guidelines for beach monitoring programmes, which are currently focused exclusively on measuring microbial levels in water. We also provide background on spatial distribution and temporal characteristics of microbes in sand, as these factors influence sampling programmes. First steps toward establishing a sand sampling programme include identifying appropriate beach sites and use of initial sanitary assessments to refine site selection. A tiered approach is recommended for monitoring. This approach would include the analysis of samples from many sites for faecal indicator organisms and other conventional analytes, while testing for specific pathogens and unconventional indicators is reserved for high-risk sites. Given the diversity of microbes found in sand, studies are urgently needed to identify the most significant aetiological agent of disease and to relate microbial measurements in sand to human health risk.
Numerical cognition is based on two components – number processing and calculation. Its development is influenced by biological, cognitive, educational, and cultural factors. The objectives of the present study were to: i) assess number processing and calculation in Brazilian children aged 7-12 years from public schools using the Zareki-R (Battery of neuropsychological tests for number processing and calculation in children, Revised; von Aster & Dellatolas, 2006) in order to obtain normative data for Portuguese speakers; ii) identify how environment, age, and gender influence the development of these mathematical skills; iii) investigate the construct validity of the Zareki-R by contrast with the Arithmetic subtest of the WISC-III. The sample included 172 children of both genders, divided into two groups: urban (N = 119) and rural (N = 53), assessed by the Zareki-R. Rural children presented lower scores in one aspect of number processing; children aged 7-8 years demonstrated a lower global score than older children; boys performed better in both number processing and calculation. Construct validity of the Zareki-R was demonstrated by high to moderate correlations with the Arithmetic subtest of the WISC-III. The Zareki-R is therefore a suitable instrument to assess the development of mathematical skills, which is influenced by factors such as environment, age, and gender.
The provocation defence, which militates against full legal responsibility for unjustified killings in several common law jurisdictions, has been the subject of considerable controversy during recent decades. Much of the criticism focused on substantive legal issues. This article examines the philosophical bases for the defence in hopes of establishing a theoretical groundwork for future debate on the legal defence. The defence originated on desert bases and continues to be understood on those grounds. This article thus examines it in light of two dominant desert-based theories of punishment originating with Aristotle and Immanuel Kant respectively.
Ultimately, the best theory of punishment and the best theory of defence are provided by different approaches. The more plausible and robust Kantian theory of punishment can nonetheless be supplemented by the Aristotelean theory of defence as a continent sociological morality to create a more nuanced account of defence that better explains both excuses in general and the provocation defence in particular. From a substantive legal perspective, this position justifies continued use of the provocation defence in our imperfect legal order, but the partial excuse of provocation will not exist in the ideal legal order. An ideal political order will sufficiently control its citizens’ emotions such that the defence cannot be justified. A partial excuse of provocation is only necessary in the interim.