A Canadian outbreak investigation into a cluster of Escherichia coli O121 was initiated in late 2016. When initial interviews using a closed-ended hypothesis-generating questionnaire did not point to a common source, cases were centrally re-interviewed using an open-ended approach. The open-ended interviews led cases to describe exposures with greater specificity, as well as food preparation activities. Data collected supported hypothesis generation, particularly with respect to flour exposures. In March 2017, an open sample of Brand X flour from a case home, and a closed sample collected at retail of the same brand and production date, tested positive for the outbreak strain of E. coli O121. In total, 76% (16/21) of cases reported that they used or probably used Brand X flour or that it was used or probably was used in the home during their exposure period. Crucial hypothesis-generating techniques used during the course of the investigation included a centralised open-ended interviewing approach and product sampling from case homes. This was the first outbreak investigation in Canada to identify flour as the source of infection.
An examination of invasive procedure cancellations found that the lack of pre-procedural oral screening was a preventable cause of cancellation for children with congenital heart disease. The purpose of this study was to implement an oral screening tool within the paediatric cardiology clinic, with referral to paediatric dental providers for positive screens. The target population was children aged ≥6 months to <18 years old who were being referred for cardiac procedures.
Methods:
The quality implementation framework method was used for this study design. The multi-modal intervention included education, audit and feedback, screening guidelines, environmental support, and interdisciplinary collaboration. Baseline rates for oral screenings were determined by retrospective chart audit from January 2018 to January 2019 (n = 211). Provider adherence to the oral screening tool was the outcome measure. Positive oral screens, resulting in referral to the paediatric dental clinic, were measured as a secondary outcome. Provider adherence rates were used as a process measure.
Results:
Data collected over 14 weeks showed a 29% increase in documentation of oral screenings prior to referral, as compared to the retrospective chart audit. During the study period, 13% of completed screenings were positive (n = 5). Provider adherence averaged 70% over the study period.
Conclusion:
A substantial increase in pre-procedural oral screenings by paediatric cardiologists was achieved using the quality implementation framework and targeted interventions.
Field studies were conducted in 2017 and 2018 in Arkansas to evaluate the injury caused by herbicides on soybean canopy formation and yield. Fomesafen, acifluorfen, S-metolachlor + fomesafen, and S-metolachlor + fomesafen + chlorimuron alone and in combination with glufosinate were applied to glufosinate-resistant soybean at the V2 growth stage. Soybean injury resulting from these labeled herbicide treatments ranged from 9% to 25% at 2 wk after application. This level of injury resulted in a 4-, 5-, 6-, and 6-d delay in soybean reaching 80% groundcover following fomesafen, acifluorfen, S-metolachlor + fomesafen, and S-metolachlor + fomesafen + chlorimuron, respectively. There was a 2-d delay in soybean reaching a canopy volume of 15,000 cm3 following each of the four herbicide treatments. The addition of glufosinate to the herbicide applications resulted in longer delays in canopy formation with every herbicide treatment except glufosinate + fomesafen. Fomesafen, acifluorfen, S-metolachlor + fomesafen, and S-metolachlor + fomesafen + chlorimuron, each applied with glufosinate, delayed soybean from reaching 80% groundcover by 2, 7, 8, and 9 d, respectively, and delayed the number of days for soybean to reach a canopy volume of 15,000 cm3 by 2, 3, 2, and 2 d, respectively. No yield loss occurred with any herbicide application. A delay in percent groundcover in soybean allows sunlight to reach the soil surface for longer periods throughout the growing season, possibly promoting late-season weed germination and the need for an additional POST herbicide application.
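The canopy metrics above (days to reach 80% groundcover or a 15,000 cm3 canopy volume) are threshold-crossing times estimated from repeated field measurements. As a rough illustration only, the short Python sketch below interpolates the day a plot first reaches 80% groundcover from hypothetical weekly measurements; the function name, the data, and the use of linear interpolation are assumptions, not the authors' actual curve-fitting procedure.

# Minimal sketch (not the authors' code): estimating the day a canopy
# metric crosses a threshold, e.g., 80% groundcover, by linear
# interpolation between repeated field measurements.
# All values below are hypothetical illustration data.
import numpy as np

def days_to_threshold(days, groundcover, threshold=80.0):
    """Return the interpolated day on which groundcover first reaches
    `threshold` percent; np.nan if it never does."""
    days = np.asarray(days, dtype=float)
    cover = np.asarray(groundcover, dtype=float)
    if cover.max() < threshold:
        return np.nan
    # np.interp expects increasing x; groundcover rises over time here,
    # so interpolate day as a function of cover.
    return float(np.interp(threshold, cover, days))

# Hypothetical weekly measurements for a nontreated and a treated plot
days_after_planting = [14, 21, 28, 35, 42, 49]
nontreated = [10, 30, 55, 75, 88, 95]
treated = [5, 20, 45, 68, 82, 93]  # injured plots canopy more slowly

delay = (days_to_threshold(days_after_planting, treated)
         - days_to_threshold(days_after_planting, nontreated))
print(f"Delay in reaching 80% groundcover: {delay:.1f} d")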
Introduction: Emergency Department (ED) crowding is an intensifying crisis. While input, throughput, and output factors all contribute to crowding, throughput factors are the most dependent on ED staff and process. Diagnostic testing is a fundamental ED process that has not been systematically evaluated. We present a systematic review of interventions designed to reduce ED length of stay (LOS) by optimizing laboratory or imaging turnaround time, or by introducing point-of-care testing (POCT). Methods: We conducted systematic database searches in Medline, Embase, CINAHL, and the Cochrane Central Register of Controlled Trials without filters or language restrictions, of all interventions on diagnostic technology that affected ED throughput (PROSPERO:CRD42019125651). Studies were screened by two independent reviewers. Study quality was assessed using the Cochrane ROB-2 tool for randomized controlled trials (RCTs), and the National Heart, Lung, and Blood Institute tool for all other study designs. Results: 18 studies met inclusion criteria (Cohen's kappa = 0.69). Study results were not pooled due to high statistical heterogeneity as assessed by chi-squared and I-squared statistics. 12 POCT intervention studies reported LOS changes ranging from -114 to +8 minutes (-26.8% to +3.8%), although three of these findings were non-significant. Four studies that initiated POCT or lab-ordering at triage reported LOS reductions ranging from 22 to 174 minutes, but only one of these, at 29 minutes (16%), was statistically significant. One study of improved laboratory troponin processing reported a LOS reduction of 43 minutes (12.3%). Another, which allowed triage nurses to order ankle x-rays using the Ottawa ankle rules, reported a non-significant LOS reduction of 28 minutes for patients with ankle injuries. LOS improvements reflected the population of patients who underwent the testing modality, rather than overall ED LOS. Seven studies had low risk of bias, 11 studies had some risk of bias, and no studies had high risk of bias (Cohen's kappa = 0.58). Conclusion: Eleven of 18 diagnostic testing studies reported LOS reductions. POCT was the most common intervention type, and usually reduced ED LOS within relevant patient subsets, while triage-initiated testing generally did not. To aid widespread adoption, future research should focus on interrupted time series or RCT designs, and more comprehensive descriptions of the contextual factors affecting implementation of these interventions.
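For readers unfamiliar with the agreement statistic reported for study screening, the brief Python sketch below shows how Cohen's kappa can be computed from two reviewers' include/exclude decisions using scikit-learn; the decision lists are hypothetical and are not drawn from this review.

# Minimal sketch (illustrative only): computing Cohen's kappa for two
# independent reviewers' include/exclude screening decisions, of the kind
# reported in the review above. Decisions below are hypothetical.
from sklearn.metrics import cohen_kappa_score

reviewer_a = ["include", "exclude", "include", "exclude", "exclude",
              "include", "exclude", "include", "exclude", "exclude"]
reviewer_b = ["include", "exclude", "exclude", "exclude", "exclude",
              "include", "exclude", "include", "include", "exclude"]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
print(f"Cohen's kappa = {kappa:.2f}")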
Introduction: Emergency Department (ED) crowding is the primary threat to emergency care quality. Input and outflow factors also contribute, but EDs must optimize throughput efficiency by improving internal processes from triage to disposition, and triage is the first throughput phase. Triage throughput interventions exclude strategies that direct patients away from the ED (these modify input rather than throughput). Previous research has described physicians in triage, team triage, telemedical triage, and nurse practitioner (NP) or physician assistant (PA) led triage, but their impact has never been systematically evaluated. Methods: We conducted systematic database searches in Medline, Embase, CINAHL, and the Cochrane Central Register of Controlled Trials without the use of filters or language restrictions of all triage interventions that affected ED throughput (PROSPERO:CRD42019125651). Two independent reviewers screened studies. Study quality was assessed using the Cochrane Risk of Bias tool (version 2) for randomized controlled trials, and the National Heart, Lung, and Blood Institute quality assessment tool for other designs. Results: 18 studies met inclusion criteria (Cohen's kappa = 0.69). Study results were not pooled due to high statistical heterogeneity as assessed by chi-squared and I-squared statistics. Studies were grouped into physician led, NP or PA led, and team triage interventions. Six physician in triage interventions reported length of stay (LOS) changes between -82 and +18 minutes. Five NP/PA led triage interventions resulted in LOS changes of -106 to +19 minutes. Five team triage interventions reported LOS reductions of 4 to 34 minutes. One telemedicine triage study reported a non-significant 8 minute increase in LOS. Six physician in triage interventions yielded significant improvement in left-without-being-seen (LWBS) rates (relative risk [RR] = 0.29-0.82). Team triage interventions generated LWBS rate changes ranging from meaningful improvement (RR = 0.58) to substantial deterioration (RR = 1.68). Five studies had low risk of bias, 11 studies had some risk of bias, and 2 studies had high risk of bias (Cohen's kappa = 0.58). Conclusion: Fourteen of 18 triage interventions reduced ED LOS and/or LWBS rates. Physician-, NP-, and PA-led triage were the most effective triage interventions. To aid widespread adoption, future research should focus on interrupted time series or RCT designs, and more comprehensive descriptions of the contextual factors affecting implementation of these interventions.
Introduction: Emergency Department (ED) crowding is an international health system issue that is worsening. Further, ED crowding and “hallway medicine” have been identified as among the most significant healthcare challenges currently facing Canadians. One contributor is preventable transfers from long-term care facilities (LTCFs) to EDs. In Canada, there were 63,752 LTCF patient transfers to the ED in 2014, with 24% (15,202) of them due to potentially preventable conditions. Each preventable transfer exposes patients to transport and hospital-related complications, and costs the healthcare system thousands of dollars. There have been many proposed and studied interventions aimed at alleviating the issue, but few attempts to assess and evaluate different interventions across institutions in a systematic manner. Methods: A scoping review of the literature using three electronic databases was conducted. A scoping review methodology was used due to the range of interventions and the heterogeneity in study design and outcome. Inclusion criteria were: studies on interventions designed to reduce transfers from LTCFs, studies that reported key outcomes such as number of ED transfers, and studies with a control or comparison group. Articles were screened by two independent reviewers (Cohen's k = 0.68), and study quality was assessed using the National Heart, Lung, and Blood Institute quality assessment tools. Results: Findings were organized into five intervention types (telemedicine, outreach teams, interdisciplinary teams, integrated approaches, and other), and both a tabular and narrative synthesis was completed. Eleven studies had a good quality assessment rating, 13 studies had a fair rating, and two studies had a poor rating. Twenty out of the 26 studies reported statistically significant reductions in ED transfer rate, ranging from 10% to 70%. Interdisciplinary healthcare teams staffed within LTCFs were the most effective interventions. Conclusion: There are several promising interventions that have successfully reduced the number of preventable transfers from LTCFs to EDs, in a variety of health system settings. Further analysis of the relative resource requirements of each intervention, and of practices that can enable successful implementation, is needed to inform healthcare policy and administrative decision making. Widespread implementation of these interventions has the potential to considerably reduce ED crowding.
Palmer amaranth is one of the most troublesome weeds of soybean in the United States. To effectively control this weed, it is necessary to optimize timing of PRE residual herbicides to mitigate Palmer amaranth emergence. Field studies were conducted in 5 site-years to assess the effect of application timing 12 to 16 d prior to planting (preplant) and at planting (PRE) on soybean injury and longevity of Palmer amaranth control using five residual herbicide treatments. A reduction in longevity of Palmer amaranth control was observed when S-metolachlor + metribuzin and flumioxazin + chlorimuron-ethyl were applied preplant vs. PRE in 2 of the 5 site-years. Sulfentrazone, sulfentrazone + cloransulam-methyl, and saflufenacil + dimethenamid-P + pyroxasulfone + metribuzin did not reduce longevity of Palmer amaranth control when applied preplant vs. PRE in any of the 5 site-years. Visible estimates of soybean injury were lower at 21 d after planting when herbicides were applied 12 to 16 d preplant vs. PRE. These findings suggest that preplant applications can be used to reduce the potential for crop injury and may not result in reduced longevity of control when herbicides with a prolonged residual activity are used. Compared with PRE herbicides applied at soybean planting, preplant applications increase the likelihood that residual herbicides are activated before subsequent weed emergence.
Rapid crop canopy formation is important to reduce weed emergence and selection for herbicide resistance. Field experiments were conducted in 2017 and 2018 in Fayetteville, AR, to evaluate the impacts of PRE applications of flumioxazin on soybean injury, soybean density, canopy formation, and incidence of soil-borne pathogens. Flumioxazin was applied at 0, 70, and 105 g ai ha−1 to predetermined flumioxazin-tolerant and -sensitive soybean varieties. Flumioxazin at 70 g ha−1 injured the tolerant and sensitive varieties from 0% to 4% and 14% to 15%, respectively. When averaged over flumioxazin rates, density of the sensitive variety was only reduced in 2017 when activation of flumioxazin was delayed 7 d. Compared to the tolerant soybean variety, flumioxazin at 70 g ha−1 delayed the sensitive variety from reaching 20%, 40%, 60%, and 80% groundcover by 15, 16, 11, and 5 d, respectively. No delay in canopy closure (95% groundcover) was observed with either variety. Consequently, no yield loss occurred for either variety following a flumioxazin application. Flumioxazin did not impact root colonization of Didymella, Fusarium, Macrophomina, or Rhizoctonia. Pythium colonization of the soybean stem was increased by flumioxazin in 2017, but not in 2018. Increased injury, delays in percent groundcover, and an increase in Pythium colonization of soybean following a flumioxazin application may warrant the need for other soil-applied herbicides at soybean planting. Alternatively, soybean injury and delays in percent groundcover following flumioxazin applications can be mitigated through appropriate variety selection; however, comprehensive screening is needed to determine which varieties are most tolerant to flumioxazin.
Negative symptoms have been previously reported during the psychosis prodrome; however, their relationship with treatment-phase negative symptoms remains unclear.
Objectives:
We report the prevalence of psychosis prodrome onset negative symptoms (PONS) and ascertain whether these predict negative symptoms at first presentation for treatment.
Methods:
Presence of expressivity or experiential negative symptom domains was established at first presentation for treatment using the Scale for Assessment of Negative Symptoms (SANS) in 373 individuals with a first episode psychosis. PONS were established using the Beiser Scale. The relationship between PONS and negative symptoms at first presentation was ascertained and regression analyses determined the relationship independent of confounding.
Results:
PONS prevalence was 50.3% in the schizophrenia spectrum group (n = 155) and 31.2% in the non-schizophrenia spectrum group (n = 218). In the schizophrenia spectrum group, PONS had a significant unadjusted (χ2 = 10.41, P < 0.001) and adjusted (OR = 2.40, 95% CI = 1.11–5.22, P = 0.027) association with experiential symptoms at first presentation; however, this relationship was not evident in the non-schizophrenia spectrum group. PONS did not predict expressivity symptoms in either diagnostic group.
Conclusion:
PONS are common in schizophrenia spectrum diagnoses, and predict experiential symptoms at first presentation. Further prospective research is needed to examine whether negative symptoms commence during the psychosis prodrome.
Almost all cases of human listeriosis are foodborne; however, the proportion in which specific exposures are identified is small. Between 1981 and 2015, 5252 human listeriosis cases were reported in England and Wales. The purpose of this study was to summarise incidents in which consumption of specific foods was linked to transmission; these comprised 11 sporadic cases and 17 outbreaks. A single community outbreak of 378 cases (7% of the total) was associated with pâté consumption, and 112 cases (2% of the total) were attributed to specific foods in all the other incidents. The proportion of food-attributed cases increased during this study with improvements in typing methods for Listeria monocytogenes. Ten incidents (one sporadic case and nine outbreaks of 2–9 cases over 4 days to 32 months) occurred in hospitals: all were associated with the consumption of pre-prepared sandwiches. The 18 community incidents comprised eight outbreaks (seven of between 3 and 17 cases) and 10 sporadic cases: food of animal origin was implicated in 16 of the incidents (sliced or potted meats, pork pies, pâté, liver, chicken, crab-meat, butter and soft cheese) and food of non-animal origin in the remaining two (olives and vegetable rennet).
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that the Frascati criteria are too liberal, resulting in a high false positive rate. Meyer et al. recommended more conservative revisions to HAND criteria, including exploring other commonly used methods of defining neurocognitive impairment (NCI) in HIV, such as the global deficit score (GDS). This study compares NCI classifications by the Frascati, Meyer, and GDS methods in relation to neuroimaging markers of brain integrity in HIV.
Method:
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing and brain structural magnetic resonance imaging and magnetic resonance spectroscopy. Participants were classified using Frascati criteria versus Meyer criteria: concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired via Frascati criteria but unimpaired via Meyer criteria. To compare the GDS with the Meyer criteria, the same groupings were formed using GDS criteria in place of Frascati criteria.
Results:
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
Conclusions:
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
A national need is to prepare for and respond to accidental or intentional disasters categorized as chemical, biological, radiological, nuclear, or explosive (CBRNE). These incidents require specific subject-matter expertise, yet have commonalities. We identify 7 core elements comprising CBRNE science that require integration for effective preparedness planning and public health and medical response and recovery. These core elements are (1) basic and clinical sciences, (2) modeling and systems management, (3) planning, (4) response and incident management, (5) recovery and resilience, (6) lessons learned, and (7) continuous improvement. A key feature is the ability of relevant subject matter experts to integrate information into response operations. We propose the CBRNE medical operations science support expert as a professional who (1) understands that CBRNE incidents require an integrated systems approach, (2) understands the key functions and contributions of CBRNE science practitioners, (3) helps direct strategic and tactical CBRNE planning and responses through first-hand experience, and (4) provides advice to senior decision-makers managing response activities. Recognition of both CBRNE science as a distinct competency and the establishment of the CBRNE medical operations science support expert informs the public of the enormous progress made, broadcasts opportunities for new talent, and enhances the sophistication and analytic expertise of senior managers planning for and responding to CBRNE incidents.
Methamphetamine (MA) dependence contributes to neurotoxicity and neurocognitive deficits. Although combined alcohol and MA misuse is common, how alcohol consumption relates to neurocognitive performance among MA users remains unclear. We hypothesized that alcohol and MA use would synergistically diminish neurocognitive functioning, such that greater reported alcohol consumption would exert larger negative effects on neurocognition among MA-dependent individuals compared to MA-nonusing persons.
Methods:
Eighty-seven MA-dependent (MA+) and 114 MA-nonusing (MA−) adults underwent neuropsychological and substance use assessments. Linear and logistic regressions examined the interaction between MA status and lifetime average drinks per drinking day on demographically corrected global neurocognitive T scores and impairment rates, controlling for recent alcohol use, lifetime cannabis use, WRAT reading performance, and lifetime depression.
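As a hedged illustration of the kind of interaction model described above (MA status by lifetime drinks per drinking day predicting global impairment), the Python sketch below fits a logistic regression with an interaction term using statsmodels; the data are simulated, the variable names are assumptions, and the study's covariates are omitted for brevity.

# Minimal sketch (not the study's analysis code): a logistic regression
# testing a group-by-alcohol interaction on impairment status, in the
# spirit of the models described above. Data are simulated, variable
# names are assumptions, and real covariates are omitted for brevity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "ma_status": rng.integers(0, 2, n),   # 1 = MA+, 0 = MA-
    "ltdpdd": rng.gamma(2.0, 2.0, n),     # lifetime drinks per drinking day
})
# Simulated outcome with a group-dependent alcohol effect
lin_pred = (-1.0 + 0.8 * df.ma_status + 0.15 * df.ltdpdd
            - 0.30 * df.ma_status * df.ltdpdd)
df["impaired"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

model = smf.logit("impaired ~ ma_status * ltdpdd", data=df).fit(disp=False)
print(model.summary())
print("Interaction odds ratio:", np.exp(model.params["ma_status:ltdpdd"]))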
Results:
MA+ displayed moderately higher rates of impairment and lower T scores compared to MA−. Lifetime alcohol use significantly interacted with MA status to predict global impairment (ORR = 0.70, p = .003) such that greater lifetime alcohol use increased likelihood of impairment in MA−, but decreased likelihood of impairment in MA+. Greater lifetime alcohol use predicted poorer global T scores among MA− (b = −0.44, p = .030) but not MA+ (b = 0.08, p = .586).
Conclusions:
Contrary to expectations, greater lifetime alcohol use related to reduced risk of neurocognitive impairment among MA users. Findings are supported by prior research identifying neurobiological mechanisms by which alcohol may attenuate stimulant-driven vasoconstriction and brain thermotoxicity. Replication and examination of neurophysiologic mechanisms underlying alcohol use in the context of MA dependence are warranted to elucidate whether alcohol confers a degree of neuroprotection.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising rates of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences on neurocognitive status and demographics. Within PLWH, neurocognitive status differences were tested on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased likelihood of SA. SA participants reported greater independence in everyday functioning, higher rates of employment, and better health-related quality of life than non-SA participants. Conclusions: Despite combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH. SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
Antibiotic use tracking in nursing homes is necessary for stewardship and regulatory requirements but may be burdensome. We used pharmacy data to evaluate whether once-weekly sampling of antibiotic use can estimate total use; we found no significant differences in estimated and measured antibiotic use.
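One way to picture the comparison described above is sketched below in Python: total antibiotic days of therapy computed from daily pharmacy data versus an estimate built from a once-weekly single-day sample scaled to seven days. The column names, simulated counts, and the simple scale-by-seven estimator are assumptions for illustration, not the study's exact method.

# Minimal sketch (illustrative assumption, not the paper's exact method):
# comparing total antibiotic days of therapy (DOT) measured from daily
# pharmacy data against an estimate from a once-weekly single-day sample
# scaled to 7 days. Column names and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2019-01-01", periods=84, freq="D")  # 12 weeks
daily = pd.DataFrame({
    "date": dates,
    "residents_on_antibiotics": rng.poisson(6, len(dates)),  # daily count
})

measured_dot = daily["residents_on_antibiotics"].sum()

# Once-weekly sample: keep one day per week (e.g., every Tuesday) and
# scale that day's count by 7 to estimate the week's DOT.
weekly_sample = daily[daily["date"].dt.dayofweek == 1]
estimated_dot = 7 * weekly_sample["residents_on_antibiotics"].sum()

print(f"Measured DOT:  {measured_dot}")
print(f"Estimated DOT: {estimated_dot}")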
Racial/ethnic minorities are more vulnerable to mental and physical health problems, but we know little about the psychobiological underpinnings of these disparities. In this study, we examined racial/ethnic differences in cortisol diurnal patterns and affect as initial steps toward elucidating long-term health disparities. A racially/ethnically diverse (39.5% White, 60.5% minority) sample of 370 adolescents (57.3% female) between the ages of 11.9 and 18 years (M = 14.65 years, SD = 1.39) participated in this study. These adolescents provided 16 cortisol samples (4 samples per day across 4 days), allowing the computation of diurnal cortisol slopes, the cortisol awakening response, and diurnal cortisol output (area under the curve), as well as daily diary ratings of high-arousal and low-arousal positive and negative affect. Consistent with prior research, we found that racial/ethnic minorities (particularly African American and Latino youth) exhibited flatter diurnal cortisol slopes compared to White youth, F (1, 344.7) = 5.26, p = .02, effect size g = 0.25. Furthermore, African American and Asian American youth reported lower levels of positive affect (both high arousal and low arousal) compared to White youth. Racial/ethnic differences in affect did not explain differences in cortisol patterns, suggesting a need to refine our models of relations between affect and hypothalamic–pituitary–adrenocortical activity. We conclude by proposing that a deeper understanding of cultural development may help elucidate the complex associations between affect and hypothalamic–pituitary–adrenocortical functioning and how they explain racial/ethnic differences in both affect and stress biology.
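To make the diurnal cortisol measures concrete, the Python sketch below computes a single day's diurnal slope, cortisol awakening response, and area under the curve with respect to ground from four samples using the trapezoidal rule; the sample times and values are hypothetical, and excluding the +30-minute sample from the slope is a common convention rather than necessarily this study's exact formula.

# Minimal sketch (hypothetical values; not the study's exact computation):
# one day's diurnal cortisol slope (linear fit of cortisol on hours since
# waking), cortisol awakening response (CAR), and total output as area
# under the curve with respect to ground (AUCg, trapezoidal rule).
import numpy as np

hours_since_waking = np.array([0.0, 0.5, 8.0, 14.0])  # waking, +30 min, afternoon, bedtime
cortisol_nmol_l = np.array([12.0, 16.5, 6.0, 2.5])    # hypothetical values

# Diurnal slope: commonly computed excluding the +30-min sample so the
# post-awakening surge does not distort the daytime decline.
slope_samples = [0, 2, 3]
slope, intercept = np.polyfit(hours_since_waking[slope_samples],
                              cortisol_nmol_l[slope_samples], deg=1)

# Cortisol awakening response: rise from waking to +30 min.
car = cortisol_nmol_l[1] - cortisol_nmol_l[0]

# Total diurnal output: AUC with respect to ground via the trapezoidal rule.
auc_g = float(np.sum(np.diff(hours_since_waking)
                     * (cortisol_nmol_l[:-1] + cortisol_nmol_l[1:]) / 2.0))

print(f"Diurnal slope: {slope:.2f} nmol/L per hour")
print(f"CAR: {car:.1f} nmol/L; AUCg: {auc_g:.1f} nmol·h/L")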
A total of 592 people reported gastrointestinal illness following attendance at Street Spice, a food festival held in Newcastle-upon-Tyne, North East England in February/March 2013. Epidemiological, microbiological and environmental investigations were undertaken to identify the source and prevent further cases. Several epidemiological analyses were conducted: a cohort study, a follow-up survey of cases, and a capture–recapture analysis to estimate the true burden of cases. Indistinguishable isolates of Salmonella Agona phage type 40 were identified in cases and on fresh curry leaves used in one of the accompaniments served at the event. Molecular testing indicated that entero-aggregative Escherichia coli and Shigella also contributed to the burden of illness. Analytical studies found strong associations between illness and eating food from a particular stall, and with food items including coconut chutney, which contained fresh curry leaves. Further investigation of the food supply chain and food preparation techniques identified a lack of clear instruction on the use of fresh uncooked curry leaves in finished dishes and uncertainty about their status as a ready-to-eat product. We describe the investigation of one of the largest outbreaks of food poisoning in England, involving several gastrointestinal pathogens including a strain of Salmonella Agona not previously seen in the UK.
OBJECTIVES/SPECIFIC AIMS: Smoking during pregnancy (SDP) is associated with negative health outcomes, both proximal (e.g., preterm labor, cardiovascular changes, low birth weight) and distal (e.g., increased child externalizing behaviors and attention deficit/hyperactivity disorder (ADHD) symptoms, increased risk of child smoking). As pregnancy provides a unique, strong incentive to quit smoking, investigating SDP allows analysis of individual predictive factors of recalcitrant smoking behaviors. Utilizing a female twin-pair cohort provides a model system for characterizing genotype×environment interactions using statistical approaches. METHODS/STUDY POPULATION: For women from the Missouri Adolescent Female Twin Study, parental reports of twin ADHD inattentive and hyperactive symptoms (twin median age 15) and twin reports of DSM-IV lifetime diagnosis of major depressive disorder and trauma exposure (physical assault and childhood sexual abuse), collected at median age 22, were merged with Missouri birth record data for enrolled twins, yielding 1553 individuals of European ancestry and 163 individuals of African-American ancestry in the final analyses. An SDP propensity score was calculated from sociodemographic variables (maternal age, marital status, educational attainment, first born child) and used as a 6-level ordinal covariate in subsequent logistic regressions. RESULTS/ANTICIPATED RESULTS: For European ancestry individuals, parental report of hyperactive ADHD symptoms and exposure to childhood sexual abuse were predictive of SDP, while a lifetime diagnosis of major depressive disorder, parental report of inattentive ADHD symptoms, and exposure to assaultive trauma were not significantly predictive of future SDP. For African-American individuals, none of these variables were significant in predicting future SDP. DISCUSSION/SIGNIFICANCE OF IMPACT: Understanding these risk mechanisms is important for clinically identifying early predictors of SDP and tailoring interventions to at-risk individuals. Ultimately, the focus of this research is to mitigate risk to pregnant smokers and their children. Additionally, the cohort-ecological approach informs how well research and administrative (vital record) data agree. This allows for evaluation of whether administrative data improve prediction in research cohorts, and conversely whether research data improve prediction over standard sociodemographic variables available in administrative data.
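As a rough sketch of the propensity-score step described above, the Python code below derives an SDP propensity score from sociodemographic variables with a logistic regression, bins it into a 6-level ordinal covariate, and includes that covariate in a subsequent logistic model; the data are simulated, and the variable names and quantile binning are assumptions rather than the study's exact procedure.

# Minimal sketch (simulated data; variable names and binning are
# assumptions, not the study's exact procedure): build a smoking-during-
# pregnancy (SDP) propensity score from sociodemographic variables, bin it
# into a 6-level ordinal covariate, and use it in a later logistic model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1500
df = pd.DataFrame({
    "maternal_age": rng.normal(27, 5, n),
    "married": rng.integers(0, 2, n),
    "educ_years": rng.integers(9, 18, n),
    "first_born": rng.integers(0, 2, n),
    "hyperactive_sx": rng.poisson(2, n),
})
# Simulated SDP outcome loosely tied to the sociodemographics
p = 1 / (1 + np.exp(-(1.5 - 0.05 * df.maternal_age - 0.6 * df.married
                      - 0.1 * df.educ_years + 0.3 * df.first_born)))
df["sdp"] = rng.binomial(1, p)

# Step 1: propensity score from sociodemographic variables only
ps_model = smf.logit("sdp ~ maternal_age + married + educ_years + first_born",
                     data=df).fit(disp=False)
df["ps"] = ps_model.predict(df)

# Step 2: collapse the score into a 6-level ordinal covariate
df["ps_level"] = pd.qcut(df["ps"], q=6, labels=False)

# Step 3: use the ordinal propensity level as a covariate when testing an
# individual predictor (here, hyperactive ADHD symptoms)
outcome_model = smf.logit("sdp ~ hyperactive_sx + ps_level", data=df).fit(disp=False)
print(outcome_model.summary())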
Neuropsychological assessment tools are the staple of our field. The development of standardized metrics sensitive to brain-behavior relationships has shaped the neuropsychological questions we can ask, our understanding of discrete brain functions, and has informed the detection and treatment of neurological disorders. We identify key turning points and innovations in neuropsychological assessment over the past 40–50 years that highlight how the tools used in common practice today came to be. Also selected for emphasis are several exciting lines of research and novel approaches that are underway to further probe and characterize brain functions to enhance diagnostic and treatment outcomes. We provide a brief historical review of different clinical neuropsychological assessment approaches (Lurian, Flexible and Fixed Batteries, Boston Process Approach) and critical developments that have influenced their interpretation (normative standards, cultural considerations, longitudinal change, common metric batteries, and translational assessment constructs). Lastly, we discuss growing trends in assessment including technological advances, efforts to integrate neuropsychology across disciplines (e.g., primary care), and changes in neuropsychological assessment infrastructure. Neuropsychological assessment has undergone massive growth in the past several decades. Nonetheless, there remain many unanswered questions and future challenges to better support measurement tools and translate assessment findings into meaningful recommendations and treatments. As technology and our understanding of brain function advance, efforts to support infrastructure for both traditional and novel assessment approaches and integration of complementary brain assessment tools from other disciplines will be integral to inform brain health treatments and promote the growth of our field. (JINS, 2017, 23, 778–790)