Psychiatric prescribers typically assess adherence by patient or caregiver self-report. A new digital medicine (DM) technology provides objective data on adherence by using an ingestible event monitoring (IEM) sensor embedded within oral medication to track ingestion. Despite likely clinical benefit, adoption by prescribers will in part depend on attitudes toward and experience with digital health technology, learning style preference (LSP), and how the technology's utility and value are described.
The objective of this study was to identify attitudes, experiences, and proclivities toward DM platforms that may affect adoption of the IEM platform, and to provide direction on tailoring educational materials to maximize adoption. A survey of prescribers treating seriously mentally ill patients was conducted to assess drivers of and barriers to IEM adoption. Factor analysis was performed on 13 items representing prior experience with and attitudes toward DM. Factor scores were correlated with prescriber characteristics, including attitude toward and experience with digital technologies, LSP, and level of focus on healthcare cost.
A total of 127 prescribers (56% female, 76% physicians, mean age 48.1 years) completed the survey. Over 90% agreed that medication adherence is important; 84.1% agreed that visits allow enough time to monitor adherence, and 92.9% that tailoring treatment to level of adherence would be beneficial. The majority (65.9%) preferred relying upon outcomes data as their learning style, while 15.9% preferred opinion leader recommendations and 18.3% information about how the technology would affect practice efficiency. Factor analysis revealed four dimensions: level of comfort with EHR; concern over current ability to monitor medication adherence; attitudes about the value of DM applications; and benefits vs. cost of DM for payers. Women scored higher on attitudes about the value of digital applications (p < 0.01). Providers who perceive non-adherence as costly, and those who believe DM could benefit providers and patients, scored higher on the value of DM (p < 0.05). Those whose LSP focuses on improving efficiency and prescribers with a higher proportion of Medicaid/uninsured patients displayed greater concern about their ability to monitor adherence (p < 0.05). Willingness to be a beta test site for DM applications was positively correlated with concern about ability to monitor adherence and attitudes about the value of DM (p < 0.01).
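As a rough illustration of the analytic approach, the following Python sketch extracts four factors from 13 survey items and correlates the factor scores with a prescriber characteristic. The simulated data, column names, extraction method, and rotation are all assumptions for illustration, not details drawn from the study.

# Minimal sketch of a four-factor analysis of 13 survey items.
# All data and column names below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
item_cols = [f"item_{i}" for i in range(1, 14)]            # 13 DM attitude/experience items
survey = pd.DataFrame(rng.normal(size=(127, 13)), columns=item_cols)
survey["pct_medicaid_uninsured"] = rng.uniform(0, 1, 127)  # hypothetical prescriber characteristic

fa = FactorAnalysis(n_components=4, rotation="varimax")    # four dimensions, as reported
scores = fa.fit_transform(survey[item_cols])

# Correlate each factor score with the prescriber characteristic.
for k in range(4):
    r, p = pearsonr(scores[:, k], survey["pct_medicaid_uninsured"])
    print(f"factor {k + 1}: r = {r:+.2f}, p = {p:.3f}")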
Prescriber characteristics, including LSP, focus on healthcare costs, and attitudes toward DM, may be related to adoption of the IEM platform. Those with more Medicaid/uninsured patients were more concerned about their ability to monitor adherence, while those focused on cost and on benefit to providers and patients viewed DM as part of a solution for managing outcomes and cost. Overall, LSP, patient panel size by payer type, and focus on healthcare cost containment should be considered when developing IEM provider training materials.
Otsuka Pharmaceutical Development & Commercialization, Inc.
People with congenital heart disease (CHD) are at increased risk for executive functioning deficits. Meta-analyses of executive function measures in CHD patients compared with healthy controls have not been reported.
To examine differences in executive functions in individuals with CHD compared to healthy controls.
We performed a systematic review of publications from 1 January 1986 to 15 June 2020 indexed in PubMed, CINAHL, EMBASE, PsycInfo, Web of Science, and the Cochrane Library.
Inclusion criteria were (1) studies containing at least one executive function measure; (2) participants were over the age of three.
Data extraction and quality assessment were performed independently by two authors. We used a shifting unit-of-analysis approach and pooled data using a random effects model.
The search yielded 61,217 results. Twenty-eight studies met criteria. A total of 7789 people with CHD were compared with 8187 healthy controls. We found the following standardised mean differences: −0.628 (−0.726, −0.531) for cognitive flexibility and set shifting, −0.469 (−0.606, −0.333) for inhibition, −0.369 (−0.466, −0.273) for working memory, −0.334 (−0.546, −0.121) for planning/problem solving, −0.361 (−0.576, −0.147) for summary measures, and −0.444 (−0.614, −0.274) for reporter-based measures (p < 0.001).
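The pooled standardised mean differences above come from a random effects model. A minimal Python sketch of one common such estimator (DerSimonian-Laird) follows, using invented per-study effect sizes rather than the study data.

# Random-effects pooling of standardised mean differences
# (DerSimonian-Laird estimator). Example inputs are invented.
import numpy as np

def pool_smd_random_effects(d, v):
    """Pool SMDs d with within-study variances v; return estimate and 95% CI."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                        # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)            # between-study variance
    w_star = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

smd, ci = pool_smd_random_effects(
    d=[-0.70, -0.55, -0.62, -0.48],                    # hypothetical per-study SMDs
    v=[0.010, 0.020, 0.015, 0.030])                    # hypothetical variances
print(f"pooled SMD = {smd:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")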
Our analysis consisted of cross-sectional and observational studies. We could not quantify the effect of collinearity.
Individuals with CHD appear to have at least moderate deficits in executive functions. Given the growing population of people with CHD, more attention should be devoted to identifying executive dysfunction in this vulnerable group.
True to its free market origins, the EU Trade Mark Directive (TMD) allows any sign to be registered as long as it is acting as a badge of origin and does not fall into one of the absolute grounds for refusal of registration. Some bars to registration cannot be overcome, such as those prohibiting the registration of functional shapes or signs that offend against public morality. However, other signs, which are descriptive, non-distinctive or generic, while disallowed from immediate registration, can subsequently be registered provided they have acquired distinctiveness through use. The EU trade mark regime thus allows for the registration of signs that under previous laws of member states might have been denied registration in the public interest. For instance, in the United Kingdom and in Germany, this category included signs that were distinctive but, the courts held, should be left free for others to use, such as geographical names. For the same reason, under the UK's 1938 Trade Marks Act, the courts would not allow the registration of functional and non-functional shape marks even with acquired distinctiveness.
The inclusion of students with autism spectrum disorder (ASD) is increasing, but there have been no longitudinal studies of included students in Australia. Interview data reported in this study concern primary school children with ASD enrolled in mainstream classes in South Australia and New South Wales, Australia. To examine perceived facilitators and barriers to inclusion, parents, teachers, and principals were asked to comment on those relevant to each child. Data are reported for 60 students, comprising a total of 305 parent interviews, 208 teacher interviews, and 227 principal interviews collected at 6-monthly intervals over 3.5 years. The most commonly mentioned facilitator was teacher practices. The most commonly mentioned barrier was intrinsic student factors. Other factors not directly controllable by school staff, such as resource limitations, were also commonly identified by principals and teachers. Parents were more likely to mention school- or teacher-related barriers. Many of the current findings were consistent with previous studies, but some differences were noted, including limited reporting of sensory issues and bullying as barriers. There was little change in the pattern of facilitators and barriers identified by respondents over time. A number of implications for practice and directions for future research are discussed.
Even though sub-Saharan African women spend millions of person-hours per day fetching water and pounding grain, to date, few studies have rigorously assessed the energy expenditure costs of such domestic activities. As a result, most analyses that consider head-hauling water or hand pounding of grain with a mortar and pestle (pilão use) employ energy expenditure values derived from limited research. The current paper compares estimated energy expenditure values from heart rate monitors v. indirect calorimetry in order to understand some of the limitations with using such monitors to measure domestic activities.
This confirmation study estimates the metabolic equivalent of task (MET) value for head-hauling water and hand-pounding grain using both indirect calorimetry and heart rate monitors under laboratory conditions.
The study was conducted in Nampula, Mozambique.
Forty university students in Nampula city who recurrently engaged in water-fetching activities.
Including all participants, the mean MET value for head-hauling 20 litres (20·5 kg, including container) of water (2·7 km/h, 0 % slope) was 4·3 (sd 0·9) and 3·7 (sd 1·2) for pilão use. Estimated energy expenditure predictions from a mixed model were found to correlate with observed energy expenditure (r² 0·68, r 0·82). Re-estimating the model with pilão use data excluded improved the fit substantially (r² 0·83, r 0·91).
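A minimal Python sketch of a mixed model of this general shape, predicting energy expenditure from heart rate with a random intercept per participant, follows. The variable names, model terms, and simulated data are assumptions, as the abstract does not specify the model.

# Mixed model: energy expenditure ~ heart rate, random intercept per
# participant. Column names and data below are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, obs = 40, 10                                    # 40 participants, repeated measures
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), obs),
    "heart_rate": rng.normal(110, 15, n * obs),
})
subj = rng.normal(0, 0.3, n)                       # participant-level variation
df["ee_observed"] = (1.5 + 0.025 * df["heart_rate"]
                     + subj[df["participant"]] + rng.normal(0, 0.2, n * obs))

model = smf.mixedlm("ee_observed ~ heart_rate", df,
                    groups=df["participant"]).fit()
df["ee_predicted"] = model.fittedvalues

r = df[["ee_observed", "ee_predicted"]].corr().iloc[0, 1]
print(f"r = {r:.2f}, r^2 = {r ** 2:.2f}")          # cf. the reported r 0·82, r² 0·68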
The current study finds that heart rate monitors are suitable instruments for providing accurate quantification of energy expenditure for some domestic activities, such as head-hauling water, but are not appropriate for quantifying expenditures of other activities, such as hand-pounding grain.
Engagement of frontline staff, along with senior leadership, in competition-style healthcare-associated infection reduction efforts, combined with electronic clinical decision support tools, appeared to reduce antibiotic regimen initiations for urinary tract infections (P = .01). Mean monthly standardized infection and device utilization ratios also decreased (P < .003 and P < .0001, respectively).
Charlemagne's Empire in Global Perspective
“A Great Wealth of Gold, Silver, and Even Gems”
[Charlemagne] was so deeply committed to assisting the poor spontaneously with charity … that he not only made the effort to give alms in his own land and kingdom, but even overseas in Syria, Egypt, and Africa. When he learned that the Christians in Jerusalem, Alexandria, and Carthage were living in poverty, he was moved by their impoverished condition and used to send money … He loved the church of St-Peter the Apostle in Rome more than all the other sacred and venerable places and showered its altars with a great wealth of gold, silver, and even gems. He [also] sent a vast number of gifts to the popes.
This evocative passage, taken from a contemporary biography of Charlemagne (r. 768–814 CE), illuminates two key features of Charlemagne's empire. First, it illustrates connections between the Franks and the world around them, showing us a Carolingian polity with a wide, if not directly global, range of vision. Second, the passage, almost casually, intimates to us the great wealth Charlemagne had available to him to make possible such elaborate acts of generosity. Other evidence suggests that Einhard, the biography's author, while he overstated matters and got some details wrong, was not leading us astray. Michael McCormick has recently analysed a survey of religious institutions in the Holy Land compiled by Charlemagne's agents, men known as missi, in preparation for the sending of alms. McCormick has explored this remarkable document, which survives in a ninth-century rotulus (a parchment scroll), and its context, arguing that Charlemagne did indeed send gifts to the Holy Land, gifts that are also referred to in royal legislation. Moreover, the evidence of trade connections in the late eighth and early ninth centuries also makes clear the extent to which the Carolingians were in contact with the societies around them. Einhard's vision was long dismissed by scholars who, working in a paradigm set by the Belgian historian Henri Pirenne, tended to see the Carolingian empire as economically stagnant and isolated.
To develop and validate the Discrepancy-based Evidence for Loss of Thinking Abilities (DELTA) score. The DELTA score characterizes the strength of evidence for cognitive decline on a continuous spectrum using well-established psychometric principles for improving detection of cognitive changes.
DELTA score development used neuropsychological test scores from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort (two tests each from Memory, Executive Function, and Language domains). We derived regression-based normative reference scores using age, gender, years of education, and word-reading ability from robust cognitively normal ADNI participants. Discrepancies between predicted and observed scores were used for calculating the DELTA score (range 0–15). We validated DELTA scores primarily against longitudinal Clinical Dementia Rating-Sum of Boxes (CDR-SOB) and Functional Activities Questionnaire (FAQ) scores (baseline assessment through Year 3) using linear mixed models and secondarily against cross-sectional Alzheimer’s biomarkers.
There were 1359 ADNI participants with calculable baseline DELTA scores (age 73.7 ± 7.1 years, 55.4% female, 100% white/Caucasian). Higher baseline DELTA scores (stronger evidence of cognitive decline) predicted higher baseline CDR-SOB (ΔR² = .318) and faster rates of CDR-SOB increase over time (ΔR² = .209). Longitudinal changes in DELTA scores tracked closely and in the same direction as CDR-SOB scores (fixed and random effects of mean + mean-centered DELTA, ΔR² > .7). Results were similar for FAQ scores. Higher DELTA scores predicted higher PET-Aβ SUVr (ρ = .324), higher CSF-pTau/CSF-Aβ ratio (ρ = .460), and demonstrated PPV > .9 for positive Alzheimer's disease biomarker classification.
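The regression-based discrepancy logic behind scores like DELTA can be sketched as follows in Python. The predictors match those named above, but the data and the mapping of discrepancies to points are illustrative assumptions, since the abstract does not give the exact scoring rules.

# Sketch of regression-based normative discrepancy scoring: fit norms in
# a cognitively normal reference sample, then score observed-minus-
# predicted discrepancies. Data and thresholds below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
norms = pd.DataFrame({"age": rng.normal(74, 7, 500),
                      "educ": rng.normal(16, 2, 500),
                      "reading": rng.normal(50, 10, 500)})
norms["memory_test"] = (60 - 0.3 * norms["age"] + 0.5 * norms["educ"]
                        + 0.2 * norms["reading"] + rng.normal(0, 5, 500))

# Normative model from the cognitively normal reference sample.
fit = smf.ols("memory_test ~ age + educ + reading", norms).fit()
rmse = np.sqrt(fit.mse_resid)

def discrepancy_points(observed, predictors, max_points=2):
    """Convert an observed-minus-predicted z-score into 0..max_points
    evidence-of-decline points (thresholds are illustrative)."""
    z = (observed - fit.predict(predictors).iloc[0]) / rmse
    if z < -2.0:
        return max_points      # strong evidence of decline
    if z < -1.0:
        return 1               # some evidence
    return 0

patient = pd.DataFrame({"age": [78], "educ": [12], "reading": [45]})
print(discrepancy_points(observed=30.0, predictors=patient))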
Data support initial development and validation of the DELTA score through its associations with longitudinal functional changes and Alzheimer’s biomarkers. We provide several considerations for future research and include an automated scoring program for clinical use.
Forty years ago, Knut Fladmark (1979) argued that the Pacific Coast offered a viable alternative to the ice-free corridor model for the initial peopling of the Americas—one of the first to support a “coastal migration theory” that remained marginal for decades. Today, the pre-Clovis occupation at the Monte Verde site is widely accepted, several other pre-Clovis sites are well documented, investigations of terminal Pleistocene subaerial and submerged Pacific Coast landscapes have increased, and multiple lines of evidence are helping decode the nature of early human dispersals into the Americas. Misconceptions remain, however, about the state of knowledge, productivity, and deglaciation chronology of Pleistocene coastlines and possible technological connections around the Pacific Rim. We review current evidence for several significant clusters of early Pacific Coast archaeological sites in North and South America that include sites as old or older than Clovis. We argue that stemmed points, foliate points, and crescents (lunates) found around the Pacific Rim may corroborate genomic studies that support an early Pacific Coast dispersal route into the Americas. Still, much remains to be learned about the Pleistocene colonization of the Americas, and multiple working hypotheses are warranted.
Canadian hospitals were made aware of the risk of Mycobacterium chimaera infection associated with heater-cooler units (HCUs) through alerts issued by the US Food and Drug Administration (FDA) and the US Centers for Disease Control and Prevention (CDC). In response, most hospitals conducted retrospective reviews for infections, informed exposed patients, and initiated a requirement for informed consent with HCU use.
Communication is the cornerstone of therapeutic relationships between nurses, children, young people and their families. Communication skills are foundational to the work we do in acute care and community settings (Levetown, 2008). As an Australian Registered Nurse, you will find that your relationship with children in your care is normally mediated through their family or carers, so the importance of communicating well with all members of the family unit cannot be overstated. Good communication develops the foundations for child- and family-centred care (Lindly, Zuckerman & Mistry, 2017), a model of care that is deeply embedded in paediatric nursing practice and that is underpinned by the assumption that children, families and health professionals work in partnership, with each party having an equal voice (Shields, 2010). Poor communication generates fear, anxiety and stress, and is a leading cause of dissatisfaction with health services.
A child's way of communicating depends on a range of factors. These include their chronological age in the first instance, but also their achievement of developmental milestones. This development is influenced by their biology, temperament, family and wider environment. Nurses need a good understanding of the cognitive and communication stages of childhood development in order to develop a set of skills that will enable them to communicate effectively with children of all ages, as well as with the adults in the family. In this chapter, important considerations for communicating with children will be presented, together with techniques needed to communicate effectively with children of different developmental stages and their families.
The child's voice in healthcare
Within the Australian healthcare system, children, young people and their families can expect to be treated with dignity and respect. The care they receive is family-centred – that is, the family unit is respected for its values and beliefs, including those relating to health and healthcare. Shared decision-making requires a commitment from the family as well as the nurse, and good communication skills are foundational to the success of such a model of care. Communicating with parents about the decisions they make regarding their child's health and healthcare may seem straightforward, but there is another element of family-centred care to which we must pay attention.
The American Academy of Neurology (AAN) updated its practice parameter for the evaluation of driving risk in dementia and developed a Caregiver Driving Safety Questionnaire, detailed in the original manuscript (Iverson, Gronseth, Reger, Classen, Dubinsky, & Rizzo, 2010). The authors described four factors associated with decreased driving ability in dementia patients: history of crashes or citations, informant-reported concerns, reduced mileage, and aggressive driving.
An informant-reported AAN Caregiver Driving Safety Questionnaire was designed with these elements, and the current study was the first to explore the factor structure of this questionnaire. Additionally, we examined associations between these factors and cognitive and behavioral measures in patients with mild cognitive impairment or early Alzheimer's disease and their informants.
Exploratory factor analysis revealed a four-component structure, consistent with the theory behind the AAN scale composition. These four factor scores were also significantly associated with performance on cognitive screening instruments and with informant-reported behavioral dysfunction. Regressions revealed that behavioral dysfunction predicted caregiver concerns about driving safety beyond objective patient cognitive dysfunction.
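The "beyond objective patient cognitive dysfunction" finding corresponds to a hierarchical (nested-model) regression. A minimal Python sketch with hypothetical variable names and simulated data follows.

# Hierarchical regression: does behavior add predictive value for
# caregiver driving concerns beyond cognition? Compare R-squared of
# nested models. All variable names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 120
df = pd.DataFrame({
    "cognition": rng.normal(0, 1, n),              # cognitive screening score
    "behavior": rng.normal(0, 1, n),               # informant-rated dysfunction
})
df["driving_concern"] = (0.3 * df["cognition"] + 0.5 * df["behavior"]
                         + rng.normal(0, 1, n))

step1 = smf.ols("driving_concern ~ cognition", df).fit()
step2 = smf.ols("driving_concern ~ cognition + behavior", df).fit()
print(f"delta R^2 for behavior = {step2.rsquared - step1.rsquared:.3f}")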
In this first known quantitative exploration of the scale, our results support continued use of this scale in office driving safety assessments. Additionally, patient behavioral changes predicted caregiver concerns about driving safety over and above cognitive status, which suggests that caregivers may benefit from psychoeducation about cognitive factors that may negatively impact driving safety.
Mia is 33 years old and for the past 10 years she has suffered pain at the entrance to her vagina when having sex. She was very frightened when she arrived at the psychosexual clinic, didn't want to take off her coat, and sat with her bag on her knee ready to leave any minute. She was annoyed that her general practitioner (GP) had asked her to see a ‘head doctor’ and felt she just needs to be ‘sorted’ so that her partner can have sex with her. When the doctor told Mia that she was a gynaecologist, Mia burst into tears and told the doctor that she had been seen by six gynaecologists and had five operations so far: three to look into her womb and two to widen the entrance to her vagina, as well as several injections into her vagina. She had been given instruments to help her dilate her vaginal entrance, which she hated using and which caused her a lot of pain, even though she had been told the gel was an anaesthetic (so it wouldn't hurt). She also felt humiliated having to use them.
The pain she felt was so severe that she couldn't have sex with her partner who, although he loved her, had said he was no longer sure that his future was with her. They had stopped trying to achieve penetration as it was ‘hopeless’, and they were more like ‘brother and sister’ than romantic partners. She was extremely angry about the failure of her treatments to date, but remained convinced of a physical problem, saying it felt like there was a ‘wall that stops him from getting in’.
On the examination couch, she drew her knees up and the cover to her chin. The doctor told her that the examination could wait and that Mia was ‘in control’ of the situation. She relaxed and allowed a finger in. Then she asked, ‘What's it like … in there?’
The doctor asked if she would like to touch her vaginal area to find out. At first, Mia recoiled from the suggestion, but eventually she tentatively touched herself and remarked that it felt warm and safe.
Scales are widely used in psychiatric assessments following self-harm. Robust evidence for their diagnostic use is lacking.
To evaluate the performance of risk scales (Manchester Self-Harm Rule, ReACT Self-Harm Rule, SAD PERSONS scale, Modified SAD PERSONS scale, Barratt Impulsiveness Scale), and of patient and clinician estimates of risk, in identifying patients who repeat self-harm within 6 months.
A multisite prospective cohort study was conducted of adults aged 18 years and over referred to liaison psychiatry services following self-harm. A priori cut-offs for each scale were evaluated using diagnostic accuracy statistics. The area under the curve (AUC) was used to determine optimal cut-offs and compare global accuracy.
In total, 483 episodes of self-harm were included in the study. The episode-based 6-month repetition rate was 30% (n = 145). Sensitivity ranged from 1% (95% CI 0–5) for the SAD PERSONS scale, to 97% (95% CI 93–99) for the Manchester Self-Harm Rule. Positive predictive values ranged from 13% (95% CI 2–47) for the Modified SAD PERSONS Scale to 47% (95% CI 41–53) for the clinician assessment of risk. The AUC ranged from 0.55 (95% CI 0.50–0.61) for the SAD PERSONS scale to 0.74 (95% CI 0.69–0.79) for the clinician global scale. The remaining scales performed significantly worse than clinician and patient estimates of risk (P < 0.001).
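As a worked illustration of the diagnostic accuracy statistics reported above, the following Python sketch computes sensitivity, positive predictive value, and AUC for one hypothetical scale at an a priori cut-off; the scores and outcomes are simulated, not the study data.

# Diagnostic accuracy of a risk scale at a fixed cut-off, plus AUC
# across all cut-offs. All inputs below are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 483
repeated = rng.random(n) < 0.30                    # ~30% six-month repetition
scale = rng.normal(0, 1, n) + 0.5 * repeated       # hypothetical risk-scale scores

cutoff = 0.0                                       # stand-in a priori cut-off
positive = scale >= cutoff
tp = np.sum(positive & repeated)                   # true positives
fp = np.sum(positive & ~repeated)                  # false positives
fn = np.sum(~positive & repeated)                  # false negatives

sensitivity = tp / (tp + fn)
ppv = tp / (tp + fp)
auc = roc_auc_score(repeated, scale)               # global accuracy across cut-offs
print(f"sensitivity = {sensitivity:.2f}, PPV = {ppv:.2f}, AUC = {auc:.2f}")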
Risk scales following self-harm have limited clinical utility and may waste valuable resources. Most scales performed no better than clinician or patient ratings of risk. Some performed considerably worse. Positive predictive values were modest. In line with national guidelines, risk scales should not be used to determine patient management or predict self-harm.
Health care providers are on the forefront of delivering care and allocating resources during a disaster; however, very few are adequately trained to respond in these situations. Furthermore, there is a void in the literature regarding the specific care needs of patients with ventricular assist devices (VADs) in a disaster setting. This project aimed to develop an evidence-based protocol to aid health care providers during the evacuation of patients with VADs during a disaster.
This is a qualitative study that used expert review, tabletop discussion, and a survey of health care professionals to develop and evaluate an evacuation protocol. The protocol was revised after each stage of review in order to reach a consensus document.
The project concluded with the finalization of a protocol that addresses evacuation and patient triage and includes an algorithm to determine which staff members should be evacuated with patients, transportation resources, evacuation documentation, and items patients need during evacuation. The protocol also addresses steps to be taken in the event that evacuation efforts fail and how to manage outpatient VAD patients seeking assistance.
This protocol provides guidance for the care of VAD patients in the event of a disaster and evacuation. Protocols such as this address difficult scenarios and should be created prior to a disaster to assist staff in making difficult decisions. These documents should be created using multi-disciplinary feedback via the consensus model as well as the Institute of Medicine (IOM; National Academy of Medicine; Washington, DC USA) “Crisis Standards of Care.”
Davis KJ, Suyama J, Lingler J, Beach M. The Development of an Evacuation Protocol for Patients with Ventricular Assist Devices During a Disaster. Prehosp Disaster Med. 2017;32(3):333–338.