Advance Clinical and Translational Research (Advance-CTR) serves as a central hub to support and educate clinical and translational researchers in Rhode Island. Understanding barriers to clinical research in the state is the key to setting project aims and priorities.
We implemented a Group Concept Mapping exercise to characterize the views of researchers and administrators regarding how to increase the quality and quantity of clinical and translational research in their settings. Participants generated ideas in response to this prompt and rated each unique idea in terms of how important it was and feasible it seemed to them.
Participants generated 78 unique ideas, from which 9 key themes emerged (e.g., Building connections between researchers). Items rated highest in perceived importance and feasibility included providing seed grants for pilot projects, connecting researchers with common interests, and networking opportunities. Implications of the results are discussed.
The Group Concept Mapping exercise enabled our project leadership to better understand stakeholder-perceived priorities and to act on ideas and aims most relevant to researchers in the state. This method is well suited to translational research enterprises beyond Rhode Island when a participatory evaluation stance is desired.
We summarize some of the past year's most important findings within climate change-related research. New research has improved our understanding of Earth's sensitivity to carbon dioxide, found that permafrost thaw could release more carbon emissions than expected, and shown that the uptake of carbon in tropical ecosystems is weakening. Adverse impacts on human society include increasing water shortages and impacts on mental health. Options for solutions emerge from rethinking economic models, rights-based litigation, strengthened governance systems and a new social contract. The disruption caused by COVID-19 could be seized as an opportunity for positive change, directing economic stimulus towards sustainable investments.
A synthesis is made of ten fields within climate science where there have been significant advances since mid-2019, through an expert elicitation process with broad disciplinary scope. Findings include: (1) a better understanding of equilibrium climate sensitivity; (2) abrupt thaw as an accelerator of carbon release from permafrost; (3) changes to global and regional land carbon sinks; (4) impacts of climate change on water crises, including equity perspectives; (5) adverse effects on mental health from climate change; (6) immediate effects on climate of the COVID-19 pandemic and requirements for recovery packages to deliver on the Paris Agreement; (7) suggested long-term changes to governance and a social contract to address climate change, learning from the current pandemic; (8) an updated positive cost–benefit ratio and new perspectives on the potential for green growth in the short and long term; (9) urban electrification as a strategy to move towards low-carbon energy systems; and (10) rights-based litigation as an increasingly important method to address climate change, with recent clarifications on the legal standing and representation of future generations.
Social media summary
Stronger permafrost thaw, COVID-19 effects and growing mental health impacts among highlights of latest climate science.
Antibiotic-resistant Gram-negative bacteraemias (GNB) are increasing in incidence. We aimed to investigate the impact of empirical antibiotic therapy on clinical outcomes by carrying out an observational 6-year cohort study of patients at a teaching hospital with community-onset Escherichia coli bacteraemia (ECB), Klebsiella pneumoniae bacteraemia (KPB) and Pseudomonas aeruginosa bacteraemia (PsAB). Antibiotic therapy was considered concordant if the organism was sensitive in vitro and discordant if resistant. We estimated the association between concordant vs. discordant empirical antibiotic therapy and the odds of in-hospital death and ICU admission for KPB and ECB. Of 1380 patients, 1103 (79.9%) had ECB, 189 (13.7%) KPB and 88 (6.4%) PsAB. Discordant therapy was not associated with increased odds of either outcome. For ECB, severe illness and non-urinary source were associated with increased odds of both outcomes (OR of in-hospital death for non-urinary source 3.21, 95% CI 1.73–5.97). For KPB, discordant therapy was associated with in-hospital death on univariable but not multivariable analysis. Illness severity was associated with increased odds of both outcomes. These findings suggest that broadening of therapy for low-risk patients with community-onset GNB is not warranted. Future research should focus on the relationship between patient outcomes, clinical factors, infection focus, causative organism and resistance profile.
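The concordant/discordant definition used in this study is, operationally, a lookup of the empirical antibiotic against the isolate's in-vitro susceptibility panel. A minimal sketch (the function name and data layout are illustrative, not from the study):

```python
def classify_empirical_therapy(isolate_susceptibilities, empirical_antibiotic):
    """Classify empirical therapy as 'concordant' if the blood-culture
    isolate was sensitive in vitro to the antibiotic given, and
    'discordant' if it was resistant, per the definition in the abstract."""
    result = isolate_susceptibilities.get(empirical_antibiotic)
    if result == "S":
        return "concordant"
    if result == "R":
        return "discordant"
    return "unknown"  # antibiotic not tested against this isolate

# Example: an isolate resistant to co-amoxiclav but sensitive to gentamicin
isolate = {"co-amoxiclav": "R", "gentamicin": "S"}
print(classify_empirical_therapy(isolate, "co-amoxiclav"))  # discordant
print(classify_empirical_therapy(isolate, "gentamicin"))    # concordant
```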
Introduction: Renal colic is among the most painful conditions that patients experience. The main outcome determinants for patients with renal colic are stone size, location and hydronephrosis; however, little is known about the association of pain with these parameters. Our objective was to determine whether more severe pain is associated with larger stones, more proximal stones or more severe hydronephrosis, findings that might suggest the need for advanced imaging, hospitalization or early intervention. Methods: We used administrative data and structured chart review to study all adult emergency department (ED) patients in two cities with a renal colic diagnosis over one year. Patients with missing imaging results or pain scores were excluded. Triage nurses recorded numeric rating scale (NRS) pain scores on arrival. We stratified patients into mild (NRS <4), moderate (NRS 4-7) and severe (NRS 8-10) pain groups, as per CTAS guidelines. Stone size (mm) and location (proximal, middle, distal ureter, or renal) were abstracted from imaging reports, while index admissions were determined from hospital discharge abstracts. We used multivariable linear regression to determine the association of arrival pain with stone characteristics and hydronephrosis severity (primary outcome), and we used multivariable logistic regression to determine the association of pain with index hospitalization (secondary outcome). We also performed a stratified analysis looking at ureteral vs. kidney (intrarenal) stones. Results: We studied 1053 patients, 66% male, with a mean age of 48 years. After controlling for patient and disease characteristics, we found no significant association between pain severity and stone size (b = −0.0004; 95% CI: −0.0015 to 0.0008) or stone location (b = 0.0045; 95% CI: −0.020 to 0.029). Nor did we find an association between pain and hydronephrosis severity (b = 0.016; 95% CI: −0.053 to 0.022, p = 0.418).
Stratified analyses using a Bonferroni correction for multiple comparisons revealed the same absence of associations in the kidney and ureteral stone subgroups. Arrival pain did not predict index admission (OR = 0.82, 95% CI: 0.59, 1.16). Conclusion: Arrival pain scores are not associated with stone size, stone location or hydronephrosis severity, and do not predict index visit hospitalization in ED patients with renal colic. Severe pain should motivate efforts to minimize treatment delays, but does not suggest the need to modify advanced imaging or admission decisions.
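A Bonferroni correction like the one applied in these stratified analyses simply divides the significance threshold by the number of comparisons. A minimal sketch (the p-values below are illustrative, not the study's):

```python
def bonferroni_threshold(alpha, n_comparisons):
    """Per-comparison significance threshold under a Bonferroni correction."""
    return alpha / n_comparisons

def significant_after_bonferroni(p_values, alpha=0.05):
    """Return which p-values survive the corrected threshold."""
    threshold = bonferroni_threshold(alpha, len(p_values))
    return [p <= threshold for p in p_values]

# Two subgroup analyses (e.g. kidney and ureteral stones) -> threshold 0.025
print(bonferroni_threshold(0.05, 2))                # 0.025
print(significant_after_bonferroni([0.418, 0.01]))  # [False, True]
```

The correction controls the family-wise error rate at the cost of power, which is why subgroup findings that survive it are more convincing.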
Introduction: Emergency department (ED) opioid prescribing has been linked to long-term use and dependence. Anecdotally, significant opioid practice variability exists between physicians and institutions, but this is poorly defined. Our objective was to collate and analyze multicenter data looking at predictors of ED opioid use and to identify potential areas for opioid stewardship. Methods: We linked administrative and computerized physician order entry (CPOE) data from all four EDs within our municipality over a one-year period. Eligible patients included those with a Canadian Triage and Acuity Scale (CTAS) pain complaint or an arrival numeric rating scale (NRS) pain score of greater than 3/10. Patients with missing demographic or chief complaint data were excluded. Multiple imputation was used for missing NRS pain scores. We performed descriptive analyses of opioid-treated and non-treated patients, followed by a multivariable logistic regression to identify predictors of ED opioid administration. Results: A total of 129,547 patients were studied. The mean age was 47.4 years and 55.4% were female. The median pain score was 6.6 in the no-opioid group and 8 in the opioid group. The most common pain categories were abdominal pain (23%), trauma (18.2%) and chest pain (15.3%). Overall, opioids were prescribed to 34% of patients. The most common CTAS level was CTAS 3 (44%), followed by CTAS 1-2 (42%) and CTAS 4-5 (13.9%). Multivariable predictors of opioid use included the need for admission (adjusted OR 6.57, CI 6.34-6.79), NRS pain score (aOR 1.24 per unit increase, CI 1.23-1.25), higher numerical CTAS score (aOR 0.89 per unit increase, CI 0.87-0.91), and chief complaints of back (aOR 7.69, CI 7.1-8.1), abdominal (aOR 5.9, CI 5.6-6.2), and flank pain (aOR 3.8, CI 3.5-4.0). Oral opioids were prescribed in 39.8% of back pain presentations and 18.5% received IV opioids. Increasing age was a predictor but sex was not.
There were significant institutional differences in opioid prescribing rates, with Hospital B being the least likely to prescribe opioids (aOR 0.82, CI 0.80-0.85), followed by Hospital C (aOR 0.83, CI 0.79-0.86), compared with the reference category, Hospital A. Hospital D was most likely to prescribe opioids (aOR 1.32, CI 1.27-1.37). Conclusion: Predictors of ED opioid use were characterized using multicenter administrative data. Future research should seek to describe the physician- and site-level factors driving regional variation in opioid-based pain treatment.
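Adjusted odds ratios such as those reported here are exponentiated logistic-regression coefficients, with Wald confidence intervals obtained by exponentiating the coefficient ± 1.96 standard errors. A small sketch with invented numbers (not the study's estimates):

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Exponentiate a logistic-regression coefficient and its Wald
    confidence limits to obtain an odds ratio with a 95% CI."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# A coefficient of ln(2) corresponds to a doubling of the odds
or_est, lo, hi = odds_ratio_with_ci(math.log(2), 0.1)
print(round(or_est, 2), round(lo, 2), round(hi, 2))  # 2.0 1.64 2.43
```

Because the interval is symmetric on the log-odds scale, it is asymmetric around the odds ratio itself, which is why reported CIs like 1.27-1.37 need not be centred on the point estimate.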
The provision of support for people with autism spectrum disorder (ASD) within the community is improving as a consequence of policy and legislative changes. However, specialist services are not currently provided in prisons.
The aim of this study was to determine the extent of ASD and co-occurring mental health problems among prisoners. We tested the hypothesis that ASD traits would be unrecognised by prison staff and would be significantly associated with increased rates of anxiety, depression and suicidality.
ASD traits were measured among 240 prisoners in a resettlement prison in London, UK, using the 20-item Autism Quotient (AQ-20). Anxiety, depression and suicidality were assessed using the Mini International Neuropsychiatric Interview (MINI).
There were 39 participants (16%) with an AQ-20 score ≥10, indicating significant autistic traits. Mental health data were available for 37 ‘high autistic trait’ participants and another 101 prisoners with no/low ASD traits. There was a significant positive association between AQ-20 and suicidality scores (r = 0.29, p = 0.001). Participants with ASD traits had significantly higher suicidality scores (means = 15.1 vs. 5.0, p = 0.001), and chi-square analysis showed that they were more likely to have a high suicidality rating (27% vs. 8%, p = 0.003) than those without ASD traits. Moreover, those with ASD traits were significantly more likely to be experiencing a current episode of depression (30% vs. 6%, p < 0.001) or Generalised Anxiety Disorder (GAD) (27% vs. 11%, p = 0.019).
Our initial data suggest that the severity of ASD traits is a risk factor for suicidality and common mental health problems among prisoners.
ADHD in childhood is associated with the development of negative psychosocial and behavioural outcomes in adults. Yet, relatively little is known about which childhood and adulthood factors are predictive of these outcomes and could be targets for effective interventions. To date, follow-up studies have largely used clinical samples from the United States, with children ascertained at baseline using broad criteria for ADHD including all clinical subtypes, or using DSM-III criteria.
To identify child and adult predictors of comorbid and psychosocial outcomes in a UK sample of children with DSM-IV combined type ADHD.
One hundred and eighteen adolescents and young adults diagnosed with DSM-IV combined type ADHD in childhood were followed for an average of 6 years. Comorbid mental health problems, drug and alcohol use and police contact were compared for those with persistent ADHD, sub-threshold ADHD and population norms taken from the Adult Psychiatric Morbidity Study 2007. Predictors included ADHD symptomatology and gender.
Persistent ADHD was associated with greater levels of anger, fatigue, sleep problems and anxiety compared with sub-threshold ADHD. Comorbid mental health problems were predicted by current symptoms of hyperactivity-impulsivity, but not by childhood ADHD severity. Both persistent and sub-threshold ADHD were associated with higher levels of drug use and police contact compared with population norms.
Young adults with a childhood diagnosis of ADHD showed increased rates of comorbid mental health problems, which were predicted by current levels of ADHD symptoms. This suggests the importance of the continuing treatment of ADHD throughout the transitional years and into adulthood. Drug use and police contact were more common in ADHD but were not predicted by ADHD severity in this sample.
Studies suggest that alcohol consumption and alcohol use disorders have distinct genetic backgrounds.
We examined whether polygenic risk scores (PRS) for consumption and problem subscales of the Alcohol Use Disorders Identification Test (AUDIT-C, AUDIT-P) in the UK Biobank (UKB; N = 121 630) correlate with alcohol outcomes in four independent samples: an ascertained cohort, the Collaborative Study on the Genetics of Alcoholism (COGA; N = 6850), and population-based cohorts: Avon Longitudinal Study of Parents and Children (ALSPAC; N = 5911), Generation Scotland (GS; N = 17 461), and an independent subset of UKB (N = 245 947). Regression models and survival analyses tested whether the PRS were associated with the alcohol-related outcomes.
In COGA, AUDIT-P PRS was associated with alcohol dependence, AUD symptom count and maximum drinks (R² = 0.47–0.68%, p = 2.0 × 10⁻⁸–1.0 × 10⁻¹⁰), and with an increased likelihood of onset of alcohol dependence (hazard ratio = 1.15, p = 4.7 × 10⁻⁸); AUDIT-C PRS was not an independent predictor of any phenotype. In ALSPAC, the AUDIT-C PRS was associated with alcohol dependence (R² = 0.96%, p = 4.8 × 10⁻⁶). In GS, AUDIT-C PRS was a better predictor of weekly alcohol use (R² = 0.27%, p = 5.5 × 10⁻¹¹), while AUDIT-P PRS was more associated with problem drinking (R² = 0.40%, p = 9.0 × 10⁻⁷). Lastly, AUDIT-P PRS was associated with ICD-based alcohol-related disorders in the UKB subset (R² = 0.18%, p < 2.0 × 10⁻¹⁶).
AUDIT-P PRS was associated with a range of alcohol-related phenotypes across population-based and ascertained cohorts, while AUDIT-C PRS showed less utility in the ascertained cohort. We show that AUDIT-P is genetically correlated with both use and misuse and demonstrate the influence of ascertainment schemes on PRS analyses.
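At its core, a polygenic risk score is a weighted sum of an individual's risk-allele dosages, weighted by per-variant effect sizes from the discovery GWAS (here, the UKB AUDIT subscales). A minimal sketch with hypothetical variants and weights; real PRS pipelines add steps such as LD clumping and p-value thresholding that are omitted here:

```python
def polygenic_risk_score(effect_sizes, dosages):
    """Weighted sum of risk-allele dosages (0, 1 or 2 copies per variant),
    weighted by the per-variant effect sizes from a discovery GWAS."""
    assert effect_sizes.keys() == dosages.keys()
    return sum(effect_sizes[v] * dosages[v] for v in effect_sizes)

# Hypothetical variants and GWAS effect sizes (illustration only)
effects = {"rs_a": 0.5, "rs_b": -0.25, "rs_c": 0.125}
person  = {"rs_a": 2,   "rs_b": 1,     "rs_c": 0}
print(polygenic_risk_score(effects, person))  # 0.75
```

Scores computed this way are then standardized and entered as predictors in the regression and survival models described above.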
The flat oyster Ostrea edulis has declined significantly in European waters since the 1850s as a result of anthropogenic activity. Ostrea edulis was designated a UK Biodiversity Action Plan Species and Habitat in 1995, and as a Feature of Conservation Importance (FOCI) within the UK Marine & Coastal Access Act 2009. To promote the recovery of oyster beds, a greater understanding of its abundance and distribution is required. The distribution of O. edulis across the proposed Blackwater, Crouch, Roach and Colne MCZ in Essex was determined between 2008 and 2012. Ostrea edulis were present in four estuary zones, with the highest sample abundance in the Blackwater and Ray Sand zones. Size structure of populations varied, with the Ray Sand and Colne zones showing a significant lack of individuals with shell height <39 mm. Ostrea edulis occurred in the highest numbers on shell substratum, followed by silty sediments. There were no significant associations between O. edulis abundance or size structure and water column Chl a, suspended solids, oxygen, nitrate or ammonium concentrations, temperature or pH. The highest abundance and most equitable population shell-size distribution for O. edulis were located within, or adjacent to, actively managed aquaculture zones. This suggests that traditional seabed management contributed to the maintenance or recovery of this species of conservation concern. Demonstration that the Essex estuaries were a stronghold for Ostrea edulis in the southern North Sea area led to the designation of the Blackwater, Crouch, Roach and Colne estuaries Marine Conservation Zone in 2013.
Apex predators play a critical role in maintaining the health of ecosystems but are highly susceptible to habitat degradation and loss caused by land-use changes, and to anthropogenic mortality. The leopard Panthera pardus is the last free-roaming large carnivore in the Western Cape province, South Africa. During 2011–2015, we carried out a camera-trap survey across three regions covering c. 30,000 km2 of the Western Cape. Our survey comprised 151 camera sites sampling nearly 14,000 camera-trap nights, resulting in the identification of 71 individuals. We used two spatially explicit capture–recapture methods (the R packages secr and SPACECAP) to provide a comprehensive density analysis capable of incorporating environmental and anthropogenic factors. Leopard density was estimated to be 0.35 and 1.18 leopards/100 km2, using secr and SPACECAP, respectively. Leopard population size was predicted to be 102–345 individuals for our three study regions. With these estimates and the predicted available leopard habitat for the province, we extrapolated that the Western Cape supports an estimated 175–588 individuals. Providing a comprehensive baseline population density estimate is critical to understanding population dynamics across a mixed landscape and helping to determine the most appropriate conservation actions. Spatially explicit capture–recapture methods are unbiased by edge effects and superior to traditional capture–mark–recapture methods when estimating animal densities. We therefore recommend further utilization of robust spatial methods as they continue to be advanced.
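The extrapolation step is essentially a density × habitat-area multiplication. A back-of-envelope sketch applying the two reported density estimates to the c. 30,000 km² survey area (the study's own 102–345 prediction comes from the fitted spatial models themselves, so these rough figures differ slightly):

```python
def predicted_population(density_per_100km2, habitat_km2):
    """Extrapolate abundance from a density estimate expressed as
    animals per 100 km2 over an area of available habitat."""
    return density_per_100km2 * habitat_km2 / 100

# secr and SPACECAP density estimates applied to the surveyed area
for density in (0.35, 1.18):
    print(round(predicted_population(density, 30_000)))
```

This simple product ignores the spatial heterogeneity that the secr/SPACECAP models capture, which is why model-based predictions are preferred for the actual estimates.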
Introduction: Emergency Department (ED) opioid prescribing has been linked to long-term use and dependence. Small packets of opioid medications are sometimes prescribed at discharge, i.e. ‘To-Go’, in an attempt to treat pain but avoid unintended consequences. The extent of this practice and its associated risks are not fully understood. This study's objective was to describe the use of ‘To-Go’ opioids in a large urban center. Methods: Multicenter linked administrative databases were used to recruit an observational cohort. The referral population comprised all patients discharged from a Calgary ED in 2016 (four hospitals) with an arrival pain score greater than 0. We first described this population and then performed a multivariable analysis to assess for predictors of ‘To-Go’ opioids. ‘To-Go’ opioids were either Tylenol-Codeine or Tylenol-Oxycodone. Results: A total of 88,855 patients were recruited. The majority were female (57%) and the average age was 44.5 years. Abdominal pain was the most frequent complaint (22.1%), followed by extremity (18.3%) and cardiac pain (8.0%). Overall, 2,736 patients (3.1%) received an opioid ‘To-Go’, with significant variation in prescribing rates across hospitals (1.8-5%; χ² p < 0.05). Logistic regression (covariates: age, sex, CTAS, pain score, type of pain, hospital, ED opioid, length of stay) revealed that receiving an opioid (IV or PO) prior to discharge was the strongest predictor of a ‘To-Go’ opioid (OR 6.4 [5.9-7.0]). Hospital (OR 1.4 [1.3-1.4]) and male sex (OR 1.2 [1.1-1.3]) also emerged as predictors, whereas age over 65 decreased the odds of a ‘To-Go’ opioid (OR 0.8 [0.6-0.9]). Hospital-specific ORs ranged from 1.3-2.7. Conclusion: In comparable patient populations, some hospitals are more likely than others to provide a short course of opioids at discharge. This difference is not explained by patient demographics, pain profiles, or medications prior to discharge.
The reasons for this variation are unclear, but they underscore the need to determine the risks of ED opioid exposures and to develop clear evidence-based prescribing guidelines.
Patients with chronic obstructive pulmonary disease (COPD) who experience acute exacerbations usually require treatment with oral steroids or antibiotics, depending on the etiology of the exacerbation. Current management is based on clinicians' assessment and judgement, which lacks diagnostic accuracy and results in overtreatment. A test to guide these decisions in primary care is in development. We developed an early decision model to evaluate the cost-effectiveness of this treatment stratification test in the primary care setting in the United Kingdom.
A combined decision tree and Markov model was developed of COPD progression and the exacerbation care pathway. Sensitivity analysis was carried out to guide technology development and inform evidence generation requirements.
The base case test strategy cost GBP 423 (USD 542) less and resulted in a health gain of 0.15 quality-adjusted life-years per patient compared with not testing. Testing reduced antibiotic prescriptions by 30 percent, potentially lowering the risk of antimicrobial resistance developing. In sensitivity analysis, the result depended on the clinical effects of treating patients according to the test result, as opposed to treating according to clinical judgement alone, for which there is limited evidence. The results were less sensitive to the accuracy of the test.
Testing may be cost-saving in primary care, but this requires robust evidence on whether test-guided treatment is effective. High quality evidence on the clinical utility of testing is required for early modeling of diagnostic tests generally.
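The paper's model combines a decision tree with a Markov model of COPD progression. The Markov half can be sketched generically as a cohort simulation: a state-occupancy vector is multiplied by a transition matrix each cycle while discounted costs and QALYs accumulate. All transition probabilities, costs and utilities below are invented for illustration, not taken from the study:

```python
def markov_cohort(transition, costs, utilities, cycles, discount=0.035):
    """Run a simple cohort Markov model. `transition[i][j]` is the
    per-cycle probability of moving from state i to state j; returns
    total discounted cost and QALYs per patient."""
    states = len(transition)
    dist = [1.0] + [0.0] * (states - 1)   # whole cohort starts in state 0
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1 / (1 + discount) ** t       # discount factor for cycle t
        total_cost += df * sum(dist[i] * costs[i] for i in range(states))
        total_qaly += df * sum(dist[i] * utilities[i] for i in range(states))
        dist = [sum(dist[i] * transition[i][j] for i in range(states))
                for j in range(states)]    # advance the cohort one cycle
    return total_cost, total_qaly

# States: 0 = stable COPD, 1 = exacerbation, 2 = dead (illustrative numbers)
P = [[0.85, 0.10, 0.05],
     [0.70, 0.20, 0.10],
     [0.00, 0.00, 1.00]]
cost, qaly = markov_cohort(P, costs=[200, 1500, 0],
                           utilities=[0.75, 0.55, 0.0], cycles=20)
print(round(cost), round(qaly, 2))
```

Comparing two such runs, one per strategy (test vs. no test), yields the incremental cost and QALY figures reported above.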
Introduction: The 72-hr unscheduled return visit (URV) of an emergency department (ED) patient is often used as a key performance indicator in emergency medicine. Patients with unscheduled return visits and admission to hospital (URVA) may represent a distinct subgroup of URVs compared to unscheduled return visits with no admission (URVNA). Methods: A retrospective cohort study of all 72-hr URVs in adults across nine EDs in the Edmonton Zone (EZ) over a one-year period (Jan 1, 2015 to Dec 31, 2015) was performed using ED information system data. URVA and URVNA populations were compared, and a multivariable analysis identified predictors of URVA. Results: Analysis of 40,870 total URV records, including 3,363 URVAs, revealed predictors of URVA at the index visit, including older age (>65 yrs, OR 3.6), fewer annual ED visits (<4 visits, OR 2.0), higher disease acuity (CTAS 2, OR 2.6), gastrointestinal presenting complaint (OR 2.2), presenting to a large referral hospital (OR 1.4), and more hours spent in the ED (>12 hours, OR 2.0). A decrease in CTAS score (increase in disease acuity) upon return visit was also a risk factor (−1 CTAS level, OR 2.6). ED crowding at the index visit, as indicated by occupancy level, was not a predictor. Conclusion: We demonstrate that URVA patients comprise a distinct subgroup of 72-hr URVs across an entire health region. Risk factors for URVA are present at the index visit, suggesting that patients at high risk for URVA may be identifiable prior to admission.
The aim of this study was to describe patient-level costing methods and develop a database of healthcare resource use and cost in patients with advanced heart failure (AHF) receiving ventricular assist device (VAD) therapy.
Patient-level micro-costing was used to identify documented activity in the years preceding and following VAD implantation, and preceding heart transplant, for a cohort of seventy-seven consecutive patients listed for heart transplantation (2009–12). Clinician interviews verified activity, established the time resource required for each activity, and added additional undocumented activities. Costs were sourced from the general ledger, salary, stock price and pharmacy formulary data, and from national medical benefits and prostheses lists. Linked administrative data analyses of activity external to the implanting institution used National Weighted Activity Units (NWAU), the 2014 efficient price and admission complexity cost weights, and were compared with micro-costed data for the implanting admission.
The database produced includes patient level activity and costs associated with the seventy-seven patients across thirteen resource areas including hospital activity external to the implanting center. The median cost of the implanting admission using linked administrative data was $246,839 (interquartile range [IQR] $246,839–$271,743), versus $270,716 (IQR $211,740–$378,482) for the institutional micro-costing (p = .08).
Linked administrative data provides a useful alternative for imputing costs external to the implanting center, and combined with institutional data can illuminate both the pathways to transplant referral and the hospital activity generated by patients experiencing the terminal phases of heart failure in the year before transplant, cf-VAD implant, or death.
Evidence regarding the seasonality of urinary tract infection (UTI) consultations in primary care is conflicting and methodologically poor. To our knowledge, this is the first study to determine whether this seasonality exists in the UK, identify the peak months and describe seasonality by age. The monthly number of UTI consultations (N = 992 803) and nitrofurantoin and trimethoprim prescriptions (N = 1 719 416) during 2008–2015 was extracted from The Health Improvement Network (THIN), a large nationally representative UK dataset of electronic patient records. Negative binomial regression models were fitted to these data to investigate seasonal fluctuations by age group (14–17, 18–24, 25–45, 46–69, 70–84, 85+) and by sex, accounting for a change in the rate of UTI over the study period. A September to November peak in UTI consultation incidence was observed for ages 14–69. This seasonality progressively faded in older age groups and no seasonality was found in individuals aged 85+, in whom UTIs were most common. UTIs were rare in males but followed a similar seasonal pattern to that in females. We show strong evidence of an autumnal seasonality for UTIs in individuals under 70 years of age and a lack of seasonality in the very old. These findings should provide helpful information when interpreting surveillance reports and the results of interventions against UTI.
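A simple way to see the kind of autumn peak reported: slide a three-month window over monthly counts (wrapping at year end) and take the window with the highest total. The counts below are synthetic, purely to illustrate the mechanics; the study itself used negative binomial regression to model the seasonal fluctuations:

```python
def peak_season(monthly_counts, window=3):
    """Return the 1-based (start_month, end_month) of the contiguous
    `window`-month span with the highest total count, wrapping December
    into January."""
    assert len(monthly_counts) == 12
    best_start = max(range(12),
                     key=lambda s: sum(monthly_counts[(s + k) % 12]
                                       for k in range(window)))
    return best_start + 1, (best_start + window - 1) % 12 + 1

# Synthetic monthly consultation counts with a September-November peak
counts = [90, 85, 88, 92, 95, 93, 96, 100, 120, 125, 118, 95]
print(peak_season(counts))  # (9, 11)
```

Regression-based approaches improve on this by separating the seasonal signal from secular trends, which matters here given the changing UTI rate over 2008–2015.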
The Functional Visual Field (FVF) offers explanatory power. To us, it relates to the existing literature on the flexibility of attentional focus in visual search and reading (Eriksen & St. James, 1986; McConkie & Rayner, 1975). The target article promotes reflection on existing findings. Here we consider the FVF as a mechanism underlying the Prevalence Effect (PE) in visual search.
Many adults with autism spectrum disorder (ASD) remain undiagnosed. Specialist assessment clinics enable the detection of these cases, but such services are often overstretched. It has been proposed that unnecessary referrals to these services could be reduced by prioritizing individuals who score highly on the Autism-Spectrum Quotient (AQ), a self-report questionnaire measure of autistic traits. However, the ability of the AQ to predict who will go on to receive a diagnosis of ASD in adults is unclear.
We studied 476 adults, seen consecutively at a national ASD diagnostic referral service for suspected ASD. We tested AQ scores as predictors of ASD diagnosis made by expert clinicians according to International Classification of Diseases (ICD)-10 criteria, informed by the Autism Diagnostic Observation Schedule-Generic (ADOS-G) and Autism Diagnostic Interview-Revised (ADI-R) assessments.
Of the participants, 73% received a clinical diagnosis of ASD. Self-report AQ scores did not significantly predict receipt of a diagnosis. While AQ scores provided high sensitivity of 0.77 [95% confidence interval (CI) 0.72–0.82] and positive predictive value of 0.76 (95% CI 0.70–0.80), the specificity of 0.29 (95% CI 0.20–0.38) and negative predictive value of 0.36 (95% CI 0.22–0.40) were low. Thus, 64% of those who scored below the AQ cut-off were ‘false negatives’ who did in fact have ASD. Co-morbidity data revealed that generalized anxiety disorder may ‘mimic’ ASD and inflate AQ scores, leading to false positives.
The AQ's utility for screening referrals was limited in this sample. Recommendations supporting the AQ's role in the assessment of adult ASD, e.g. UK NICE guidelines, may need to be reconsidered.
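The four metrics reported for the AQ come from a standard 2×2 screening table crossing screen result against clinical diagnosis. A minimal sketch with invented counts (the abstract reports the metrics, not the underlying table):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 table of
    screen results against clinical diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),  # diagnosed cases scoring above cut-off
        "specificity": tn / (tn + fp),  # non-cases scoring below cut-off
        "ppv": tp / (tp + fp),          # above cut-off who truly have ASD
        "npv": tn / (tn + fn),          # below cut-off who truly do not
    }

# Illustrative counts only
m = screening_metrics(tp=80, fp=15, fn=20, tn=35)
print({k: round(v, 2) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on prevalence, which is why a questionnaire can look adequate in one metric yet fail in another in a high-prevalence referral sample like this one.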
The ultimate goal of upper-limb rehabilitation after stroke is to promote real-world use, that is, use of the paretic upper-limb in everyday activities outside the clinic or laboratory. Although real-world use can be collected through self-report questionnaires, an objective indicator is preferred. Accelerometers are a promising tool. The current paper aims to explore the feasibility of accelerometers to measure upper-limb use after stroke and discuss the translation of this measurement tool into clinical practice. Accelerometers are non-invasive, wearable sensors that measure movement in arbitrary units called activity counts. Research to date indicates that activity counts are a reliable and valid index of upper-limb use. While most accelerometers are unable to distinguish between the type and quality of movements performed, recent advancements have used accelerometry data to produce clinically meaningful information for clinicians, patients, family and caregivers. Despite this, widespread uptake in research and clinical environments remains limited. If uptake were enhanced, we could build a deeper understanding of how people with stroke use their arm in real-world environments. In order to facilitate greater uptake, however, there is a need for greater consistency in protocol development, accelerometer application and data interpretation.