Lithium was first found to have an acute antimanic effect in 1948, with further corroboration in the early 1950s. It nevertheless took some time for lithium to become the standard treatment for relapse prevention in bipolar affective disorder. In this study, our aims were to examine the factors associated with the likelihood of maintaining lithium levels within the recommended therapeutic range and to assess the stability of lithium levels between blood tests. We examined this relation using clinical laboratory serum lithium test requesting data collected from three large UK centres, where the approach to managing patients with bipolar disorder and to ordering lithium testing varied.
46,555 lithium test requests in 3,371 individuals over 7 years were included from three UK centres. Using lithium results in four categories (<0.4 mmol/L; 0.40–0.79 mmol/L; 0.80–0.99 mmol/L; ≥1.0 mmol/L), we determined the proportion of instances where, on subsequent testing, lithium results remained in the same category or switched category. We then examined the association between testing interval and the proportion remaining within target, and the effect of age, duration of lithium therapy and testing history.
For tests within the recommended range (0.40–0.79 and 0.80–0.99 mmol/L categories), 84.5% of subsequent tests remained within this range. Overall, 3-monthly testing was associated with 90% of lithium results remaining within range, compared with 85% at 6-monthly intervals. At all test intervals, lithium test result history in the previous 12 months was associated with the proportion of next test results on target (BNF/NICE criteria): 90% remained within the target range after 6 months if all tests in the previous 12 months were on target. Age and duration of lithium therapy had no significant effect on lithium level stability. Levels within the 0.80–0.99 mmol/L category were linked to a higher probability of moving to the ≥1.0 mmol/L category (10%) than those in the 0.40–0.79 mmol/L category (2%), irrespective of testing frequency. Thus, the stability of lithium levels over the previous 12 months is a predictor of their future stability.
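As a sketch, the four-category binning and the "remained within range" proportion described above might be computed as follows (function names and category labels are ours, for illustration only, not the authors' code):

```python
# Illustrative sketch of the study's four-category lithium binning and the
# proportion of in-range results followed by another in-range result.

def lithium_category(level_mmol_per_l):
    """Bin a serum lithium result into the study's four categories."""
    if level_mmol_per_l < 0.40:
        return "<0.4"
    if level_mmol_per_l < 0.80:
        return "0.40-0.79"
    if level_mmol_per_l < 1.00:
        return "0.80-0.99"
    return ">=1.0"

def proportion_remaining_in_range(results):
    """Over consecutive test pairs, the fraction where an in-range result
    (0.40-0.99 mmol/L) is followed by another in-range result."""
    in_range = {"0.40-0.79", "0.80-0.99"}
    pairs = [(lithium_category(a), lithium_category(b))
             for a, b in zip(results, results[1:])]
    eligible = [p for p in pairs if p[0] in in_range]
    if not eligible:
        return None
    return sum(p[1] in in_range for p in eligible) / len(eligible)
```

For example, for the hypothetical series `[0.55, 0.62, 1.05, 0.70]`, one of the two in-range results is followed by an out-of-range result, giving a proportion of 0.5.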
We propose that, for those who achieve 12 months of lithium tests within the 0.40–0.79 mmol/L range, it would be reasonable to increase the interval between tests to 6 months, irrespective of age, freeing up resources to focus on those less concordant with their lithium monitoring. Where the lithium level is 0.80–0.99 mmol/L, the test interval should remain at 3 months. This could reduce lithium test numbers by 15% and costs by ~$0.4m p.a.
This study examined lithium results and requesting patterns over a 6-year period, and compared these to guidance.
Bipolar disorder is the 4th most common mental health condition, affecting ~1% of UK adults. Lithium is an effective treatment for prevention of relapse and hospital admission, and is recommended by NICE as a first-line treatment.
We have previously shown in other areas that laboratory testing patterns are highly variable with sub-optimal conformity to guidance.
Lithium requests received by Clinical Biochemistry Departments at the University Hospitals of North Midlands, Salford Royal Foundation Trust and Pennine Acute Hospitals from 2012–2018 were extracted from Laboratory Information and Management Systems (46,555 requests; 3,371 individuals). We categorised by request source, lithium concentration and re-test intervals.
Many lithium results were outside the NICE therapeutic window (0.6–0.99 mmol/L): 49.3% were below the window and 6.1% were above it (median [Li]: 0.61 mmol/L). A small percentage were found at the extremes (3.2% at <0.1 mmol/L; 1.0% at >1.4 mmol/L). Findings were comparable across all sites.
For requesting interval, there was a distinct peak at 12 weeks, consistent with guidance for those stabilised on lithium therapy. There was no peak evident at 6 months, as recommended for those <65 years old on unchanging therapy. There was a peak at 0–7 days, reflecting those requiring closer monitoring (e.g. treatment initiation or results suggesting toxicity).
However, 77.6% of tests were requested outside expected testing frequencies.
We showed that: (a) lithium levels are often maintained at the lower end of the NICE recommended therapeutic range (and the BNF range: 0.4–1.0 mmol/L); (b) patterns of lithium results and testing frequency are comparable across three sites with differing models of care; (c) re-test intervals demonstrate a noticeable peak at the recommended 3-monthly interval, but not at 6-monthly intervals; and (d) many tests were repeated outside these expected frequencies, contrary to NICE guidance.
This chapter rounds off Section 2. In it, one of the authors, Jonathan Montgomery, begins by highlighting his view of the recurrent themes that arise from all eight chapters in this section.
Then, one of the editors, Alex Haslam, responds by substantially agreeing with Jonathan Montgomery. However, Haslam takes the opportunity to clarify one of the points that Montgomery makes, with the intention of drawing attention to a key issue that runs like an artery through the body of this book. This concerns the nature of personalised healthcare and how this should best be understood and delivered. Haslam cautions that, in the process of developing personalised care, we should avoid the temptation to reduce people's maladies to their individual conditions.
Projections of Paleoindian range mobility in the late Pleistocene are typically inferred from straight-line distances between toolstone sources and sites where artifacts of these raw materials have been found. Often, however, these sourcing assessments are not based on geologic analysis, raising the issue of correct source ascription. If sites of similar age can be linked to a toolstone source through geologic study, and direct procurement of toolstone can be inferred, geographic information systems (GIS) modeling of travel routes between the source and those sites can reveal route segments of annual rounds and aspects of landscape use. In the Hudson Valley of eastern New York, Paleoindian peoples exploited Normanskill chert outcrops for toolstone during the late Pleistocene. Here, we combine X-ray fluorescence sourcing results that link Normanskill chert artifacts at Paleoindian sites to the West Athens Hill source outcrop in the Hudson Valley with GIS least cost path analysis to model seasonal pathways of late Pleistocene peoples in northeastern North America.
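Least cost path analysis of the kind applied above can be illustrated with a minimal grid-based Dijkstra search. The toy cost surface, grid connectivity, and function name below are our own inventions; real analyses derive the cost raster from slope, hydrology, and other landscape variables:

```python
import heapq

def least_cost_path(cost, start, goal):
    """Dijkstra least-cost path over a 2-D cost grid (4-connected).
    Entering a cell accrues that cell's cost; the start cell also counts."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk predecessors back from the goal to recover the path.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

# Toy cost surface: the high-cost ridge (9s) forces a detour around it.
terrain = [
    [1, 1, 1, 1],
    [9, 9, 9, 1],
    [1, 1, 1, 1],
]
path, total = least_cost_path(terrain, (0, 0), (2, 0))
```

The cheapest route skirts the ridge (total cost 9) rather than crossing it directly (cost 11), which is the essence of how modeled travel routes diverge from straight-line distances.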
Defining the link between wind-tunnel settling-chamber screens, flow quality and test-section boundary-layer spanwise variation is necessary for accurate transition prediction. The aim of this work is to begin establishing this link. The computed, steady, laminar wake of a zither (screen model) with imperfect wire spacing is tracked through a contraction and into a model test section. The contraction converts the zither wake into streamwise vorticity, which then creates spanwise variation (streaks) in the test-section boundary layer. The magnitude of the spanwise variation is sensitive to the zither open-area ratio and imperfections, but the observed wavelength is relatively insensitive to the zither wire spacing. Increased spanwise variation is attributed to large-wavelength variation of drag across the zither, and not to the coalescence-of-jets phenomenon. The linear stability of the streaks is predicted using the parabolized stability equations with the e^N method. A standard deviation of zither wire position error of 38.1 μm (15% of wire diameter) for a zither of 50% open-area ratio is found to suppress Tollmien–Schlichting wave growth significantly.
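The imperfect zither geometry studied here can be sketched numerically. The wire diameter of 254 μm below is inferred from "38.1 μm = 15% of wire diameter", and the open-area relation β = 1 − d/s for parallel wires is our assumption; everything else is illustrative:

```python
import random

def zither_wire_positions(n_wires, wire_d, open_area_ratio, pos_err_sd, seed=0):
    """Nominal positions of parallel zither wires plus Gaussian placement
    error. Assuming open-area ratio = 1 - d/s for parallel wires, the
    nominal pitch is s = d / (1 - open_area_ratio)."""
    rng = random.Random(seed)
    pitch = wire_d / (1.0 - open_area_ratio)
    return [i * pitch + rng.gauss(0.0, pos_err_sd) for i in range(n_wires)]

# 50% open-area ratio, 254 um wires, 38.1 um (15% of d) position error.
d = 254e-6
positions = zither_wire_positions(50, wire_d=d, open_area_ratio=0.5,
                                  pos_err_sd=0.15 * d)
```

With zero position error the wires fall on an exact 508 μm pitch; the Gaussian jitter then produces the spatially varying gaps (and hence drag variation) that the abstract links to boundary-layer streaks.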
The redshifted 21cm line of neutral hydrogen (HI), potentially observable at low radio frequencies (~50–200 MHz), should be a powerful probe of the physical conditions of the intergalactic medium during Cosmic Dawn and the Epoch of Reionisation (EoR). The sky-averaged HI signal is expected to be extremely weak (~100 mK) in comparison to the foreground of up to 10^4 K at the lowest frequencies of interest. The detection of such a weak signal requires an extremely stable, well characterised system and a good understanding of the foregrounds. Development of a nearly perfectly (~mK accuracy) calibrated total power radiometer system is essential for this type of experiment. We present the BIGHORNS (Broadband Instrument for Global HydrOgen ReioNisation Signal) experiment, which was designed and built to detect the sky-averaged HI signal from the EoR at low radio frequencies. The BIGHORNS system is a mobile total power radiometer, which can be deployed in any remote location in order to collect radio frequency interference (RFI) free data. The system was deployed in remote, radio quiet locations in Western Australia and low RFI sky data have been collected. We present a description of the system, its characteristics, details of data analysis, and calibration. We have identified multiple challenges to achieving the required measurement precision, which triggered two major improvements for the future system.
Working within a series of partnerships among an academic health center, local health departments (LHDs), and faith-based organizations (FBOs), we validated companion interventions to address community mental health planning and response challenges in public health emergency preparedness.
We implemented the project within the framework of an enhanced logic model and employed a multi-cohort, pre-test/post-test design to assess the outcomes of 1-day workshops in psychological first aid (PFA) and guided preparedness planning (GPP). The workshops were delivered to urban and rural communities in eastern and midwestern regions of the United States. Intervention effectiveness was based on changes in relevant knowledge, skills, and attitudes (KSAs) and on several behavioral indexes.
Significant improvements were observed in self-reported and objectively measured KSAs across all cohorts. Additionally, GPP teams proved capable of producing quality drafts of basic community disaster plans in 1 day, and PFA trainees confirmed upon follow-up that their training proved useful in real-world trauma contexts. We documented examples of policy and practice changes at the levels of local and state health departments.
Given appropriate guidance, LHDs and FBOs can implement an effective and potentially scalable model for promoting disaster mental health preparedness and community resilience, with implications for positive translational impact. (Disaster Med Public Health Preparedness. 2014;8:511-526)
British popular newspapers were fascinated by the terrible power of the nuclear bomb, and they devoted countless articles, editorials and cartoons to it. In so doing, they played a significant role in shaping the nuclear culture of the post-war period. Yet scholars have given little sustained attention to this rich seam of material. This article makes a contribution to remedying this major gap by offering an overview of the coverage of nuclear weaponry in the two most popular newspapers in Britain, the Daily Express and the Daily Mirror, in the period from 1945 to the early 1960s. Although both papers supported British possession of the bomb, claiming that it was essential for the maintenance of great-power status, their reporting was more complex and critical than the existing scholarship has tended to assume. This article argues that sceptical voices in the press often disrupted official narratives and that journalists emphasized the potential dangers involved in the nuclear arms race. Newspapers frequently highlighted, rather than downplayed, the horrors of the bomb: it was repeatedly portrayed as a ‘monster’ threatening the world.
Community disaster preparedness plans, particularly those with content that would mitigate the effects of psychological trauma on vulnerable rural populations, are often nonexistent or underdeveloped. The purpose of the study was to develop and evaluate a model of disaster mental health preparedness planning involving a partnership among three key stakeholders in the public health system.
A one-group, post-test, quasi-experimental design was used to assess outcomes as a function of an intervention designated Guided Preparedness Planning (GPP). The setting was the eastern, northern, and mid-shore regions of the state of Maryland. Partner participants were four local health departments (LHDs), 100 faith-based organizations (FBOs), and one academic health center (AHC)—the latter comprising collaborating entities of the Johns Hopkins University and the Johns Hopkins Health System. Individual participants were 178 community residents recruited from counties of the above-referenced geographic area. Effectiveness of GPP was based on post-intervention assessments of trainee knowledge, skills, and attitudes supportive of community disaster mental health planning. Inferences about the practicability (feasibility) of the model were drawn from pre-defined criteria for partner readiness, willingness, and ability to participate in the project. Additional aims of the study were to determine if LHD leaders would be willing and able to generate post-project strategies to perpetuate project-initiated government/faith planning alliances (sustainability), and to develop portable methods and materials to enhance model application and impact in other health jurisdictions (scalability).
The majority (95%) of the 178 lay citizens receiving the GPP intervention and submitting complete evaluations reported that planning-supportive objectives had been achieved. Moreover, all criteria for inferring model feasibility, sustainability, and scalability were met.
Within a six-month period, LHDs, FBOs, and AHCs can work effectively to plan, implement, and evaluate what appears to be an effective, practical, and durable model of capacity building for public mental health emergency planning.
McCabe OL, Perry C, Azur M, Taylor HG, Gwon H, Mosley A, Semon N, Links JM. Guided Preparedness Planning with Lay Communities: Enhancing Capacity of Rural Emergency Response Through a Systems-Based Partnership. Prehosp Disaster Med. 2012;28(1):1-8.
The contributions to this volume identify ethical considerations implicated in genetic research on addiction and their translation into public policy. This chapter draws conclusions and proposes specific recommendations to guide research practice and help shape policy development based on this analysis. It should be noted that the views expressed in the summary and recommendations represent the views of the three authors of this chapter and may not necessarily be those of the authors of the rest of the chapters in this volume. The guidance is organized into four sections: (1) conceptualizing addiction; (2) the limitations of behavioral genetics in general and addiction genetics in particular; (3) research ethics; and (4) the translation and interpretation of addiction genetics research.
The manner in which addiction is conceptualized has been a controversial topic, with implications for the treatment of affected individuals and for social policy. Addiction is usually thought of as comprising a combination of three characteristics: physical dependence (e.g., physical symptoms of withdrawal and tolerance), psychological dependence (e.g., drug cravings), and harmful use. Most addicts will exhibit all three, but the manner in which they are manifested at different points can vary significantly. For example, it is possible to be addicted to largely nonharmful substances (e.g., caffeine). There are also forms of addiction that do not entail significant physical dependence: pathological gambling, for instance, is a harmful pattern of gambling that primarily involves psychological dependence. Conversely, substances may be used in ways that cause considerable acute physical and social harm in the absence of either psychological or physical dependence (e.g., alcohol-induced violence, drunk driving). It is also possible to develop a physical dependence on a substance in the absence of psychological dependence. For example, the administration of opiates by a physician or other third party, often in the treatment of pain, may lead to physical dependence (e.g., tolerance to the pharmacological effects of opiates necessitating increasing doses to maintain the same analgesic effects, or adverse withdrawal symptoms following abrupt cessation of opiates) but no psychological dependence (i.e., no psychological need to use the drug post-detoxification).
The domestic dog is the reservoir host of Leishmania infantum, the causative agent of zoonotic visceral leishmaniasis endemic in Mediterranean Europe. Targeted control requires predictive risk maps of canine leishmaniasis (CanL), which are now explored. We databased 2187 published and unpublished surveys of CanL in southern Europe. A total of 947 western surveys met inclusion criteria for analysis, including serological identification of infection (504,369 dogs tested 1971–2006). Seroprevalence was 23.2% overall (median 10%). Logistic regression models within a GIS framework identified the main environmental predictors of CanL seroprevalence in Portugal, Spain, France and Italy, or in France alone. A 10-fold cross-validation approach determined model capacity to predict point-values of seroprevalence and the correct seroprevalence class (<5%, 5–20%, >20%). Both the four-country and France-only models performed reasonably well for predicting correctly the <5% and >20% seroprevalence classes (AUC >0.70). However, the France-only model performed much better for France than the four-country model. The four-country model adequately predicted regions of CanL emergence in northern Italy (<5% seroprevalence). Both models poorly predicted intermediate point seroprevalences (5–20%) within regional foci, because surveys were biased towards known rural foci and Mediterranean bioclimates. Our recommendations for standardizing surveys would permit higher-resolution risk mapping.
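The 10-fold cross-validation and three-class seroprevalence assessment described above can be sketched in plain Python. This is illustrative only (the study's models were logistic regressions fitted within a GIS framework, not shown here):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices into k near-equal folds and yield
    (train, test) index lists for each of the k folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start, folds = 0, []
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i, test in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

def seroprevalence_class(p):
    """The three seroprevalence classes used to assess model predictions."""
    return "<5%" if p < 0.05 else ("5-20%" if p <= 0.20 else ">20%")
```

For the 947 surveys in the analysis, each fold would hold out 94 or 95 surveys for validation while the model is fitted on the remainder.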
Chunking is a powerful encoding strategy that significantly improves
working memory performance in normal young people.
To investigate chunking in patients with mild Alzheimer's disease and in
a control group of elderly people without cognitive impairment.
People with mild Alzheimer's disease (n = 28) were
recruited and divided according to Mini-Mental State Examination score
into mild and very mild disease groups. A control group of 15 elderly
individuals was also recruited. All participants performed digit and
spatial working memory tasks requiring either unstructured sequences or
structured sequences (which encourage chunking of information) to be recalled.
The control group and both disease groups performed significantly better
on structured trials of the digit working memory tasks, indicating
successful use of chunking strategies to improve verbal working memory
performance. The control and very mild disease groups also performed
significantly better on structured trials of the spatial task, whereas
those with mild disease demonstrated no significant difference between
the structured and unstructured spatial conditions.
The ability to use chunking as an encoding strategy to improve verbal
working memory performance is preserved at the mild stage of Alzheimer's
disease, whereas use of chunking to improve spatial working memory is
impaired by this stage. Simple training in the use of chunking might be a
beneficial therapeutic strategy to prolong working memory functioning in
patients at the earliest stage of Alzheimer's disease.
Dietary microparticles are non-biological, bacterial-sized particles. Endogenous sources are derived from intestinal Ca and phosphate secretion. Exogenous sources are mainly titanium dioxide (TiO2) and mixed silicates (Psil); they are resistant to degradation and accumulate in human Peyer's patch macrophages, and there is some evidence that they exacerbate inflammation in Crohn's disease (CD). However, whether their intake differs between those with and without CD has not been studied. We aimed to identify dietary microparticle sources and intakes in subjects with and without CD. Patients with inactive CD and matched general practice-based controls (ninety-one per group) completed 7 d food diaries. Intake data for dietary fibre and sucrose were compared as positive controls. All foods, pharmaceuticals and toothpastes were examined for microparticle content, and intakes of Ca and exogenous microparticles were compared between the two groups. Dietary intakes were significantly different between cases and controls for dietary fibre (12 (SD 5) v. 14 (SD 5) g/d; P=0.001) and sucrose (52 (SD 27) v. 45 (SD 18) g/d; P=0.04) but not for Ca. Estimated median TiO2 and Psil intakes (2.5 and 35 mg/individual per d respectively, totalling 10^12–10^13 microparticles/individual per d) were broadly similar to per capita estimates, and while there was wide variation in intakes between individuals there was no significant difference between subjects with CD and controls. Hence, if exposure to microparticles is associated with the inflammation of CD, then the present study rules out excess intake as the problem. Nonetheless, microparticle-containing foods have now been identified, which allows a low-microparticle diet to be further assessed in CD.
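The order of magnitude of the reported particle counts (10^12–10^13 per individual per day from ~2.5 mg TiO2 and ~35 mg silicate) can be checked with back-of-envelope arithmetic. The particle diameters and densities below are typical literature values we assume for the sketch, not figures taken from the study:

```python
from math import pi

def particles_from_mass(mass_g_per_day, density_g_cm3, diameter_um):
    """Number of spherical particles of the given size and density
    contained in a daily mass intake."""
    d_cm = diameter_um * 1e-4
    particle_mass_g = density_g_cm3 * (pi / 6.0) * d_cm ** 3  # sphere volume x density
    return mass_g_per_day / particle_mass_g

# Assumed: food-grade TiO2 ~0.2 um diameter, density 4.23 g/cm3;
# mixed silicates ~0.2 um diameter, density 2.6 g/cm3.
tio2 = particles_from_mass(2.5e-3, 4.23, 0.2)   # ~1e11 particles/d
psil = particles_from_mass(35e-3, 2.6, 0.2)     # ~3e12 particles/d
total = tio2 + psil
```

Under these assumptions the total lands at roughly 3 x 10^12 particles per day, consistent with the 10^12–10^13 range reported.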