This textbook provides a concise introduction to geophysical data processing - many of the techniques associated with the general field of time series analysis - for advanced students, researchers, and professionals. The treatment begins with calculus before transitioning to discrete time series via the sampling theorem, discussions of aliasing, the use of complex sinusoids, the development of the discrete Fourier transform from the Fourier series, and an overview of linear digital filter types and descriptions. Aimed at senior undergraduate and graduate students in geophysics, environmental science, and engineering with no previous background in linear algebra, probability, statistics or Fourier transforms, this textbook draws scenarios and datasets from across the world of geophysics, and shows how data processing techniques can be applied to real-world problems using detailed examples, illustrations, and exercises. Exercises are mostly computational in nature and may be completed using MATLAB or a computing environment with similar capabilities.
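As a taste of the aliasing phenomenon the book develops from the sampling theorem, here is a minimal sketch (in Python rather than MATLAB; the signal frequency and sample rate are illustrative choices, not taken from the book): a sinusoid above the Nyquist frequency masquerades as a lower-frequency one after sampling.

```python
import numpy as np

# Sampling theorem demo: a signal must be sampled at more than twice its
# highest frequency, or it aliases to a lower apparent frequency.
fs = 100.0          # sampling rate, Hz (Nyquist frequency = fs/2 = 50 Hz)
n = 1000
t = np.arange(n) / fs

f_true = 70.0       # deliberately above Nyquist
x = np.sin(2 * np.pi * f_true * t)

# Discrete Fourier transform via FFT; keep only non-negative frequencies
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(n, d=1 / fs)
f_peak = freqs[np.argmax(spec)]

# A 70 Hz sinusoid sampled at 100 Hz appears at |fs - f_true| = 30 Hz
print(f_peak)   # -> 30.0
```

The DFT peak lands at 30 Hz, not 70 Hz, which is exactly the alias predicted by the sampling theorem.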
Access to, and the skills to use, social technology (i.e. social internet use, social media and social applications) are considered vital to online social connection. While evidence exists on facilitators of and barriers to general technology use, evidence is limited regarding the motivators, skills and tangible offline benefits that older technology users experience with social technology. This study therefore used a qualitative, exploratory method to understand older adults’ experiences of using social technology to connect with others. Semi-structured interviews were conducted with 20 older adults (65+ years) across England, Scotland and Wales. Despite having access to social technology for social connection, and using this technology regularly, participants faced multiple barriers that affected their motivation and skills for use, namely perceived self-efficacy and fear, the culture of online communication, absence of social capital and physical functioning. Some of these barriers to social technology use are reminiscent of barriers to wider technology use, emphasising the importance of addressing them to reduce digital exclusion as well as to support social connection. However, some barriers were specific to social technology use and should be considered when providing guidance or interventions to increase older adults’ online social connection. Social connection was a clear, tangible outcome of social technology use, and individuals discussed the benefits of using social technology, particularly visual communication tools, for online connection.
To determine risk factors for carbapenemase-producing organisms (CPOs) and to determine the prognostic impact of CPOs.
A retrospective matched case–control study.
Inpatients across Scotland in 2010–2016 were included. Patients with a CPO were matched with 2 control groups by hospital, admission date, specimen type, and bacteria. One control group comprised patients either infected or colonized with a non-CPO; the other comprised general inpatients.
Conditional logistic regression models were used to identify risk factors for CPO infection and colonization, respectively. Mortality rates and length of postisolation hospitalization were compared between CPO and non-CPO patients.
In total, 70 CPO infection cases (with 210 general inpatient controls and 121 non-CPO controls) and 34 CPO colonization cases (with 102 general inpatient controls and 60 non-CPO controls) were identified. Risk factors for CPO infection versus general inpatients were prior hospital stay (adjusted odds ratio [aOR], 4.05; 95% confidence interval [CI], 1.52–10.78; P = .005), longer hospitalization (aOR, 1.07; 95% CI, 1.04–1.10; P < .001), longer intensive care unit (ICU) stay (aOR, 1.41; 95% CI, 1.01–1.98; P = .045), and immunodeficiency (aOR, 3.68; 95% CI, 1.16–11.66; P = .027). Risk factors for CPO colonization were prior high-dependency unit (HDU) stay (aOR, 11.46; 95% CI, 1.27–103.09; P = .030) and endocrine, nutritional, and metabolic (ENM) diseases (aOR, 3.41; 95% CI, 1.02–11.33; P = .046). Risk factors for CPO infection versus non-CPO infection were prolonged hospitalization (aOR, 1.02; 95% CI, 1.00–1.03; P = .038) and HDU stay (aOR, 1.13; 95% CI, 1.02–1.26; P = .024). No differences in mortality rates were detected between CPO and non-CPO patients. CPO infection was associated with longer hospital stay than non-CPO infection (P = .041).
A history of (prolonged) hospitalization, prolonged ICU or HDU stay, ENM diseases, and immunodeficiency increased the risk for CPO. CPO infection was not associated with increased mortality but was associated with prolonged hospital stay.
The Cognitive Battery of the National Institutes of Health Toolbox (NIH-TB) is a collection of assessments that have been adapted and normed for administration across the lifespan and is increasingly used in large-scale population-level research. However, despite increasing adoption in longitudinal investigations of neurocognitive development, and growing recommendations that the Toolbox be used in clinical applications, little is known about the long-term temporal stability of the NIH-TB, particularly in youth.
The present study examined the long-term temporal reliability of the NIH-TB in a large cohort of youth (9–15 years old) recruited across two data collection sites. Participants were invited to complete testing annually for 3 years.
Reliability was generally low-to-moderate, with intraclass correlation coefficients ranging between 0.31 and 0.76 for the full sample. There were multiple significant differences between sites, with one site generally exhibiting stronger temporal stability than the other.
Reliability of the NIH-TB Cognitive Battery was lower than expected given early work examining shorter test-retest intervals. Moreover, there were very few instances of tests meeting stability requirements for use in research; none of the tests exhibited adequate reliability for use in clinical applications. Reliability is paramount to establishing the validity of the tool; thus, the constructs assessed by the NIH-TB may vary over time in youth. We recommend further refinement of the NIH-TB Cognitive Battery and its norming procedures for children before further adoption as a neuropsychological assessment. We also urge researchers who have already employed the NIH-TB in their studies to interpret their results with caution.
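For readers unfamiliar with the stability metric reported above, the following is a minimal sketch of a single-measure, absolute-agreement intraclass correlation, ICC(2,1), computed from a toy subjects-by-sessions matrix. The scores are invented for illustration; the formula follows the standard two-way random-effects ANOVA decomposition, which may differ from the exact ICC variant used in the study.

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: (n_subjects, k_sessions) array of scores."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means
    # ANOVA mean squares
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)    # between subjects
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)    # between sessions
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Invented scores: 5 participants tested in 3 annual sessions
scores = np.array([[10., 11., 12.],
                   [ 8.,  7.,  9.],
                   [14., 15., 13.],
                   [ 6.,  7.,  6.],
                   [11., 12., 14.]])
print(round(icc_2_1(scores), 2))   # -> 0.89
```

In this toy example the between-subject variance dwarfs the session-to-session noise, so the ICC is high; the 0.31–0.76 range reported above indicates much weaker rank stability across annual retests.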
Background: With the emergence of antibiotic-resistant threats and the need for appropriate antibiotic use, laboratory microbiology information is important to guide clinical decision making in nursing homes, where access to such data can be limited. Susceptibility data are necessary to inform antibiotic selection and to monitor changes in resistance patterns over time. To contribute to existing data that describe antibiotic resistance among nursing home residents, we summarized antibiotic susceptibility data from organisms commonly isolated from urine cultures collected as part of the CDC multistate Emerging Infections Program (EIP) nursing home prevalence survey. Methods: In 2017, urine culture and antibiotic susceptibility data for selected organisms were retrospectively collected from nursing home residents’ medical records by trained EIP staff. Urine culture results reported as negative (no growth) or contaminated were excluded. Susceptibility results were recorded as susceptible, non-susceptible (resistant or intermediate), or not tested. The pooled mean percentage tested and percentage non-susceptible were calculated for selected antibiotic agents and classes using available data. Susceptibility data were analyzed for organisms with ≥20 isolates. The definition for multidrug resistance (MDR) was based on the CDC and European Centre for Disease Prevention and Control’s interim standard definitions. Data were analyzed using SAS v9.4 software. Results: Among 161 participating nursing homes and 15,276 residents, 300 residents (2.0%) had documentation of a urine culture at the time of the survey, and 229 (76.3%) were positive. Escherichia coli, Proteus mirabilis, Klebsiella spp, and Enterococcus spp represented 73.0% of all urine isolates (N = 278). There were 215 (77.3%) isolates with reported susceptibility data (Fig. 1). Of these, data were analyzed for 187 (87.0%) (Fig. 2). All isolates tested for carbapenems were susceptible.
Fluoroquinolone non-susceptibility was most prevalent among E. coli (42.9%) and P. mirabilis (55.9%). Among Klebsiella spp, the highest percentages of non-susceptibility were observed for extended-spectrum cephalosporins and folate pathway inhibitors (25.0% each). Glycopeptide non-susceptibility was 10.0% for Enterococcus spp. The percentage of isolates classified as MDR ranged from 10.1% for E. coli to 14.7% for P. mirabilis. Conclusions: Substantial levels of non-susceptibility were observed for nursing home residents’ urine isolates, with 10% to 56% reported as non-susceptible to the antibiotics assessed. Non-susceptibility was highest for fluoroquinolones, an antibiotic class commonly used in nursing homes, and ≥ 10% of selected isolates were MDR. Our findings reinforce the importance of nursing homes using susceptibility data from laboratory service providers to guide antibiotic prescribing and to monitor levels of resistance.
Background: Antibiotics are among the most commonly prescribed drugs in nursing homes; urinary tract infections (UTIs) are a frequent indication. Although there is no gold standard for the diagnosis of UTIs, various criteria have been developed to inform and standardize nursing home prescribing decisions, with the goal of reducing unnecessary antibiotic prescribing. Using different published criteria designed to guide decisions on initiating treatment of UTIs (ie, symptomatic, catheter-associated, and uncomplicated cystitis), our objective was to assess the appropriateness of antibiotic prescribing among nursing home residents. Methods: In 2017, the CDC Emerging Infections Program (EIP) performed a prevalence survey of healthcare-associated infections and antibiotic use in 161 nursing homes from 10 states: California, Colorado, Connecticut, Georgia, Maryland, Minnesota, New Mexico, New York, Oregon, and Tennessee. EIP staff reviewed resident medical records to collect demographic and clinical information, infection signs, symptoms, and diagnostic testing documented on the day an antibiotic was initiated and 6 days prior. We applied 4 criteria to determine whether initiation of treatment for UTI was supported: (1) the Loeb minimum clinical criteria (Loeb); (2) the Suspected UTI Situation, Background, Assessment, and Recommendation tool (UTI SBAR tool); (3) adaptation of Infectious Diseases Society of America UTI treatment guidelines for nursing home residents (Crnich & Drinka); and (4) diagnostic criteria for uncomplicated cystitis (cystitis consensus) (Fig. 1). We calculated the percentage of residents for whom initiating UTI treatment was appropriate by these criteria. Results: Of 248 residents for whom UTI treatment was initiated in the nursing home, the median age was 79 years [IQR, 19], 63% were female, and 35% were admitted for postacute care.
There was substantial variability in the percentage of residents with antibiotic initiation classified as appropriate by each of the criteria, ranging from 8% for the cystitis consensus, to 27% for Loeb, to 33% for the UTI SBAR tool, to 51% for Crnich and Drinka (Fig. 2). Conclusions: Appropriate initiation of UTI treatment among nursing home residents remained low regardless of criteria used. At best only half of antibiotic treatment met published prescribing criteria. Although insufficient documentation of infection signs, symptoms and testing may have contributed to the low percentages observed, adequate documentation in the medical record to support prescribing should be standard practice, as outlined in the CDC Core Elements of Antibiotic Stewardship for nursing homes. Standardized UTI prescribing criteria should be incorporated into nursing home stewardship activities to improve the assessment and documentation of symptomatic UTI and to reduce inappropriate antibiotic use.
OBJECTIVES/GOALS: BURRITO is an efficient strategy that provides full disclosure in the electronic medical record of a patient’s preference in real time. BURRITO uses printed materials only to inform patients and has consent rates below 50%. We hypothesized that adding an informational video to the printed materials would increase donations. METHODS/STUDY POPULATION: This study was IRB-approved and was considered minimal risk. The BURRITO self-consent workflow process (Soares et al., Biopreservation and Biobanking, in press) was developed in an outpatient cardiology clinic. In the same clinic, patients were randomized to receive printed materials only (standard procedure) or the printed materials plus a 2.5-minute informational video (intervention) while waiting for the physician in the exam room. Randomization occurred at the level of the clinic day. Patients were blinded to the nature of the study. Following the presentation of information, the patient’s decision on consent for donation was documented in the electronic record by ancillary clinical staff. Rates of consent were analyzed after completion of the trial by a statistician not involved in the experiment. RESULTS/ANTICIPATED RESULTS: Thirty-five clinic days were randomized to either intervention (17 days) or standard (18 days), and a total of 255 patients decided during their visit to either “opt in” or “opt out” of donating remnant biospecimens for future research. One hundred patients opted to defer deciding (28%). No significant demographic differences were noted between the study arms. The rate of consent was 73% vs 58% in the intervention and control groups, respectively (p = 0.014). This represents a 96% increase in the odds of consenting with an informational video (OR = 1.96; 95% CI, 1.15–3.34).
DISCUSSION/SIGNIFICANCE OF IMPACT: This is the first randomized trial to show that an informational video combined with printed materials is superior to printed materials alone when patients self-consent to opt in to clinical remnant biospecimen donation. This result adds to the evidence that the BURRITO process plus video (BURRITOv) is an effective approach for universal biospecimen consent.
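The reported odds ratio can be reproduced directly from the consent rates quoted in the abstract. A quick arithmetic check (rates only; the underlying counts are not given, so the confidence interval is not recomputed here):

```python
# Odds ratio from the reported consent rates: 73% intervention vs 58% control.
p_int, p_ctl = 0.73, 0.58

odds_int = p_int / (1 - p_int)   # odds of consent, intervention arm
odds_ctl = p_ctl / (1 - p_ctl)   # odds of consent, control arm
odds_ratio = odds_int / odds_ctl

print(round(odds_ratio, 2))      # -> 1.96, matching the reported OR
```

An OR of 1.96 corresponds to the stated 96% increase in the odds of consenting with the video.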
Dietary restriction of fermentable oligosaccharides, disaccharides, monosaccharides and polyols (FODMAP) is clinically effective and a commonly utilised approach in the management of functional symptoms in irritable bowel syndrome. Despite this, the low FODMAP diet has a number of challenges: it can alter the gut microbiota; impact nutrient intake and diet quality; is complex to understand; requires the patient to be adequately supported to follow the diet accurately and safely; and lastly, not all patients respond to the diet. The current review highlights the evidence for the clinical effectiveness of the low FODMAP diet, but focusses on the challenges associated with the diet to the patient, health professionals and the wider healthcare service. Finally, the review discusses research findings and practical guidance for how these challenges can be minimised and mitigated. The low FODMAP diet is a useful management strategy for irritable bowel syndrome, with data from clinical trials suggesting a 50–80% response rate, and when administered appropriately, the challenges to implementing the diet can be overcome so that these outcomes can be realised effectively and safely in clinical practice.
Acute change in mental status (ACMS), defined by the Confusion Assessment Method, is used to identify infections in nursing home residents. A medical record review revealed that none of 15,276 residents had an ACMS documented. Using the revised McGeer criteria with a possible ACMS definition, we identified 296 residents and 21 additional infections. The use of a possible ACMS definition should be considered for retrospective nursing home infection surveillance.
Pregabalin is indicated for the treatment of GAD in adults in Europe. The efficacy and safety of pregabalin for the treatment of adults and elderly patients with GAD have been demonstrated in 6 of 7 short-term (4- to 8-week) clinical trials.
To characterise the long-term efficacy and safety of pregabalin in subjects with GAD.
Subjects were randomised to double-blind treatment with either high-dose pregabalin (450-600 mg/d), low-dose pregabalin (150-300 mg/d), or lorazepam (3-4 mg/d) for 3 months. Treatment was extended with drug or blinded placebo for a further 3 months.
At 3 months, mean change from baseline Hamilton Anxiety Rating Scale (HAM-A) for pregabalin high- and low-dose, and for lorazepam ranged from -16.0 to -17.4. Mean change from baseline Clinical Global Impression-Severity (CGI-S) scores ranged from -2.1 to -2.3 and mean CGI-Improvement (CGI-I) scores were 1.9 for each active treatment group. At 6 months, improvement was retained for all 3 active drug groups, even when switched to placebo. HAM-A and CGI-S change from baseline scores ranged from -14.9 to -19.0 and -2.0 to -2.5, respectively. Mean CGI-I scores ranged from 1.5 to 2.3. The most frequently reported adverse events were insomnia, fatigue, dizziness, headache, and somnolence.
Efficacy was observed at 3 months, with maintained improvement in anxiety symptoms over 6 months of treatment. These results are consistent with previously reported efficacy and safety trials of shorter duration with pregabalin and lorazepam in subjects with GAD.
Psychiatry in the UK has longstanding recruitment problems (1). Evidence suggests the positive effects of clinical attachments on attitudes towards psychiatry are often transient (2). We therefore created the Psychiatry Early Experience Programme (PEEP) where year 1 medical students are paired with psychiatry trainees and shadow them at work. Students will ideally remain in PEEP throughout medical school, providing consistent exposure to psychiatry and a broad experience of its subspecialties.
1. To present PEEP
2. To assess:
a. Students’ baseline attitudes to psychiatry
b. PEEPs’ impact on students’ attitudes to psychiatry
A prospective survey-based cohort study of King’s College London medical students.
PEEP started in 2013. In this cohort, all students who signed up were accepted.
Students’ attitudes towards psychiatry were assessed on recruitment using the ATP-30 questionnaire (3), and will be re-assessed annually.
127 students were recruited. Attitudes were positive overall. 73% listed psychiatry in their top three specialities. 95.3% agreed or strongly agreed that ‘psychiatric illness deserves at least as much attention as physical illness.’ 84.3% disagreed or strongly disagreed that ‘at times it is hard to think of psychiatrists as equal to other doctors.’
Baseline attitudes to psychiatry were positive. By March 2015 we aim to collect and analyse data on students’ attitudes after one year in PEEP. Through ongoing analysis of this and future cohorts, we aim to assess the impact of PEEP on improving attitudes to psychiatry and whether this will ultimately improve recruitment.
Pregabalin is indicated for the treatment of generalised anxiety disorder (GAD) in adults in Europe. When pregabalin is discontinued, a 1-week (minimum) taper is recommended to prevent potential discontinuation symptoms.
To evaluate whether a 1-week pregabalin taper, after 3 or 6 months of treatment, is associated with the development of discontinuation symptoms (including rebound anxiety) in subjects with GAD.
Subjects were randomised to double-blind treatment with low- (150-300 mg/d) or high-dose pregabalin (450-600 mg/d) or lorazepam (3-4 mg/d) for 3 months. After 3 months ~25% of subjects in each group (per the original randomisation) underwent a double-blind, 1-week taper, with substitution of placebo. The remaining subjects continued on active treatment for another 3 months and underwent the 1-week taper at 6 months.
Discontinuation after 3 months was associated with low mean changes in Physician Withdrawal Checklist (PWC) scores (range: +1.4 to +2.3) and Hamilton Anxiety Rating Scale (HAM-A) scores (range: +0.9 to +2.3) for each pregabalin dose and lorazepam. Discontinuation after 6 months was associated with low mean changes in PWC scores (range: -1.0 to +3.0) and HAM-A scores (range: -0.8 to +3.0) for all active drugs and placebo. Incidence of rebound anxiety during pregabalin taper was low and did not appear related to treatment dose or duration.
A 1-week taper following 3 or 6 months of pregabalin treatment was not associated with clinically meaningful discontinuation symptoms as evaluated by changes in the PWC and HAM-A rating scales.
At Guy's King's and St Thomas’ School of Medicine, a unique initiative is the Psychiatry Early Experience Programme (PEEP), which allows students to shadow psychiatry trainees at work several times a year. The students’ attitudes towards psychiatry and the scheme are regularly assessed and initial results are already available.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
Only with the completion of the life cycle of Fasciola hepatica in 1883 and, 30 years later, those of Schistosoma japonicum (1913), Schistosoma haematobium and Schistosoma mansoni (1915) did research on schistosomiasis really get underway. One of the first papers, by Cawston in 1918, describing attempts to establish the means of transmission of S. haematobium in Natal, South Africa, forms the historical perspective against which to judge where we are now. Molecular biology techniques have produced a much better definition of the complexity of the schistosome species and their snail hosts, but have also revealed the extent of hybridization between human and animal schistosomes that may impact on parasite adaptability. While diagnostics have greatly improved, the ability to detect single worm pair infections routinely still falls short of its goal. The introduction of praziquantel around 1982 has revolutionized the treatment of infected individuals and led directly to the mass drug administration programmes. In turn, the severe pathological consequences of high worm burdens have been minimized, and for S. haematobium infections the incidence of associated squamous cell carcinoma has been reduced. In comparison, the development of effective vaccines has yet to come to fruition. The elimination of schistosomiasis japonica from Japan shows what is possible, using multiple lines of approach, but the clear and present danger is that the whole edifice of schistosome control is balanced on the monotherapy of praziquantel, and the development of drug resistance could topple that.
It is an article of faith that organized interests represent members to elected officials making use of synchronized communication channels. Rarely, if ever, have researchers had access to multiple internal and external channels to test this notion. We mine a trove of nearly 2,500 emails the Family Research Council (FRC) sent to list subscribers from 2007 to 2018. Text tools allow us to depict the message flexibility of the FRC. We then consider how internal and external messages may be linked by examining the issue content of emails in relation to press releases. Finally, we note the bills lobbied by the FRC and the frequency with which these are mentioned in the internal email messages. Our findings are twofold. First, they support the conditional independence of communication channels in ways that appear to conform to the requisites of the different audiences: elected officials are likely mobilized by different signals than members are. Second, our evidence shows that the flexibility organized interests have in composing their communications can mean that different audiences are presented with considerably different political agendas. While the FRC has significant and sophisticated message flexibility, our data set indicates that such flexibility can raise serious concerns about good-faith representation.
Psychotropic prescription rates continue to increase in the United States (USA). Few studies have investigated whether social-structural factors may play a role in psychotropic medication use independent of mental illness. Food insecurity is prevalent among people living with HIV in the USA and has been associated with poor mental health. We investigated whether food insecurity was associated with psychotropic medication use independent of the symptoms of depression and anxiety among women living with HIV in the USA.
We used cross-sectional data from the Women's Interagency HIV Study (WIHS), a nationwide cohort study. Food security (FS) was the primary explanatory variable, measured using the Household Food Security Survey Module. First, we used multivariable linear regressions to test whether FS was associated with symptoms of depression (Center for Epidemiologic Studies Depression [CESD] score), generalised anxiety disorder (GAD-7 score) and mental health-related quality of life (MOS-HIV Mental Health Summary score; MHS). Next, we examined associations of FS with the use of any psychotropic medications, including antidepressants, sedatives and antipsychotics, using multivariable logistic regressions adjusting for age, race/ethnicity, income, education and alcohol and substance use. In separate models, we additionally adjusted for symptoms of depression (CESD score) and anxiety (GAD-7 score).
Of the 905 women in the sample, two-thirds were African-American. Lower FS (i.e. worse food insecurity) was associated with greater symptoms of depression and anxiety in a dose–response relationship. For the psychotropic medication outcomes, marginal and low FS were associated with 2.06 (p < 0.001; 95% confidence interval [CI] = 1.36–3.13) and 1.99 (p < 0.01; 95% CI = 1.26–3.15) times higher odds of any psychotropic medication use, respectively, before adjusting for depression and anxiety. The association of very low FS with any psychotropic medication use was not statistically significant. A similar pattern was found for antidepressant and sedative use. After additionally adjusting for CESD and GAD-7 scores, marginal FS remained associated with 1.93 (p < 0.05; 95% CI = 1.16–3.19) times higher odds of any psychotropic medication use. Very low FS, conversely, was significantly associated with lower odds of antidepressant use (adjusted odds ratio = 0.42; p < 0.05; 95% CI = 0.19–0.96).
Marginal FS was associated with higher odds of using psychotropic medications independent of depression and anxiety, while very low FS was associated with lower odds. These complex findings may indicate that people experiencing very low FS face barriers to accessing mental health services, while those experiencing marginal FS who do access services are more likely to be prescribed psychotropic medications for distress arising from social and structural factors.
An asymptotic model is derived for the competitive diffusion-limited evaporation of multiple thin sessile droplets under the assumption that the droplets are well separated. Exact solutions of the model are obtained for a pair of identical droplets and for a polygonal array of identical droplets, and the model is found to perform well even outside its formal range of validity, up to and including the limit of touching droplets. The shielding effect of droplets on each other is demonstrated, and the model is used to investigate the effect of this shielding on droplet evolutions and lifetimes, as well as on the coffee-ring effect. The theoretical predictions of the model are found to be in good agreement with recent experimental results for seven relatively closely spaced droplets, suggesting that the model could be a useful tool for studying a wide range of other droplet configurations.
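For context, the multi-droplet model builds on the classical diffusion-limited result for a single droplet. For a thin (small contact angle) sessile droplet of contact radius $R$, solving Laplace's equation for the quasi-steady vapour concentration gives the total evaporation rate (a standard single-droplet result, stated here as background; $D$, $c_{\mathrm{sat}}$ and $c_{\infty}$ denote the vapour diffusivity, the saturation concentration at the free surface and the far-field concentration, and are not defined in the abstract itself):

```latex
F = 4 D R \left( c_{\mathrm{sat}} - c_{\infty} \right)
```

Shielding in the multi-droplet case reduces each droplet's evaporation rate below this isolated-droplet value, because neighbouring droplets raise the local vapour concentration.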