Psychiatry draws widely upon insights from many realms, ranging from public health and the social sciences to the humanities. As psychiatric disorders affect mood, cognition, perception, emotion, and behavior, brain science is recognized as foundational to understanding their pathophysiology. Along with the disciplines of neurology, neurosurgery, and neuroradiology, psychiatry is often regarded as one of the clinical neurosciences.
This essay contests the prevalent view of Richard Wright as a proponent of violent Black masculinity. Beginning with Uncle Tom’s Children, I argue, Wright provides a radical critique of the very ‘macho’ violent wish-fulfillment he has been accused of endorsing. Stories such as "Big Boy Leaves Home," "Down by the Riverside," "Long Black Song" and "The Ethics of Living Jim Crow" underscore the cruel, ironic bind of Black masculinity under Jim Crow: Black male children can be punished as “men” for the slightest perceived misstep, even while grown Black men are forced to assume the permanent position of “boys,” forever deferential to white authority. Wright confronts us with the trauma of Black male vulnerability, while also interrogating the complex and contradictory psychological reactions and socio-political responses such vulnerability gives rise to. His work grasps the impulse to Black masculinism as an understandable response to the particular historical circumstances of Jim Crow, while at the same time underscoring the strategic liability of such violent and individualist reactions. Ultimately, Wright suggests that it is only the concerted response of the larger Black community that offers Black boys and men alike a chance of meaningful resistance.
Objective: Critically ill patients requiring extracorporeal membrane oxygenation (ECMO) frequently need interhospital transfer to a center with ECMO capabilities. Patients receiving ECMO were evaluated to determine whether interhospital transfer was a risk factor for the subsequent development of a nosocomial infection.
Design: Retrospective cohort study.
Setting: A 425-bed academic tertiary-care hospital.
Patients: All adult patients who received ECMO for >48 hours between May 2012 and May 2020.
Methods: The rate of nosocomial infections for patients receiving ECMO was compared between patients who were cannulated at the ECMO center and patients who were cannulated at a hospital without ECMO capabilities and transported to the ECMO center for further care. Additionally, time to infection, organisms responsible for infection, and site of infection were compared.
Results: In total, 123 patients were included in the analysis. For the primary outcome of nosocomial infection, there was no difference in the number of infections per 1,000 ECMO days (25.4 vs 29.4; P = .03) by univariate analysis. By Cox proportional-hazards analysis, transport was not significantly associated with increased infections (hazard ratio, 1.7; 95% confidence interval, 0.8–4.2; P = .20).
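For readers unfamiliar with the analyses named above, the sketch below illustrates how an infection rate per 1,000 ECMO days and a Cox proportional-hazards comparison of transported versus locally cannulated patients might be computed. The data frame, column names, and values are hypothetical stand-ins rather than the study's dataset, and the lifelines package is just one of several libraries that could be used.

```python
# Illustrative sketch only: hypothetical records, not the study's dataset.
import pandas as pd
from lifelines import CoxPHFitter  # assumes lifelines is installed

# One row per patient: days on ECMO, whether a nosocomial infection occurred,
# and whether the patient was cannulated elsewhere and transported.
df = pd.DataFrame({
    "ecmo_days":   [12, 30, 7, 21, 45, 10, 18, 25],
    "infection":   [1, 1, 0, 0, 1, 0, 1, 0],     # 1 = nosocomial infection
    "transported": [1, 0, 1, 0, 1, 0, 0, 1],     # 1 = cannulated at referring hospital
})

# Crude rate: infections per 1,000 ECMO days in each group.
rates = (df.groupby("transported")
           .apply(lambda g: 1000 * g["infection"].sum() / g["ecmo_days"].sum()))
print(rates)

# Cox proportional-hazards model for time to first infection,
# censored at decannulation; reports a hazard ratio for transport.
cph = CoxPHFitter()
cph.fit(df, duration_col="ecmo_days", event_col="infection")
cph.print_summary()
```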
Conclusions: In this study, we did not identify an increased risk of nosocomial infection associated with interhospital transfer during the subsequent hospitalization. Further studies are needed to identify sources of nosocomial infection in this high-risk population.
Iron deficiency (ID) in early life is associated with morbidities. Most fetal iron required for infant growth is acquired in the third trimester from maternal iron stores. However, how prenatal iron levels affect ferritin levels in early infancy remains controversial. This study aimed to examine the associations between maternal ferritin levels and cord blood serum ferritin (CBSF) and to compare ferritin levels between different feeding practices in early infancy. Healthy Chinese mothers with uncomplicated pregnancies and their infants were followed up at 3 months post-delivery for questionnaire completion and infant blood collection. Infants who were predominantly breastfed and those who were predominantly formula fed were included in this analysis. Serum ferritin levels were measured in maternal blood samples collected upon delivery, in cord blood and in infant blood samples at 3 months of age. Ninety-seven mother–baby dyads were included. Maternal ID was common (56 %), while the CBSF levels were significantly higher than maternal ferritin levels. Only three infants (3 %) had ID at 3 months of age. There were no significant correlations between maternal ferritin levels and CBSF (r 0·168, P = 0·108) or infant ferritin levels at 3 months of age (r 0·023, P = 0·828). Infant ferritin levels at 3 months were significantly and independently associated with CBSF (P = 0·007) and birth weight (P < 0·001) after adjusting for maternal age, parity, maternal education, infant sex and feeding practice. In conclusion, maternal ID was common upon delivery. However, maternal ferritin levels were not significantly associated with CBSF concentrations or with infant ferritin concentrations at 3 months of age.
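As a rough illustration of the two analyses reported above (a Pearson correlation between maternal ferritin and CBSF, and a covariate-adjusted linear model of infant ferritin at 3 months), the sketch below uses simulated data; the column names, distributions, and the particular covariate subset are assumptions for illustration only.

```python
# Illustrative sketch with simulated data, not the study's measurements.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 97
df = pd.DataFrame({
    "maternal_ferritin":  rng.lognormal(2.5, 0.6, n),
    "cbsf":               rng.lognormal(4.5, 0.4, n),
    "infant_ferritin_3m": rng.lognormal(5.0, 0.5, n),
    "birth_weight":       rng.normal(3200, 400, n),
    "maternal_age":       rng.normal(31, 4, n),
    "parity":             rng.integers(0, 3, n),
    "breastfed":          rng.integers(0, 2, n),
})

# Pearson correlation between maternal ferritin and cord blood serum ferritin.
r, p = pearsonr(df["maternal_ferritin"], df["cbsf"])
print(f"maternal ferritin vs CBSF: r = {r:.3f}, P = {p:.3f}")

# Infant ferritin at 3 months modelled on CBSF and birth weight,
# adjusting for maternal age, parity, and feeding practice.
fit = smf.ols("infant_ferritin_3m ~ cbsf + birth_weight + maternal_age"
              " + parity + C(breastfed)", data=df).fit()
print(fit.summary())
```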
Patients with single-ventricle CHD undergo a series of palliative surgeries that culminate in the Fontan procedure. While the Fontan procedure allows most patients to survive to adulthood, the Fontan circulation can eventually lead to multiple cardiac complications and multi-organ dysfunction. Care for adolescents and adults with a Fontan circulation has begun to transition from a primarily cardiac-focused model to care models designed to monitor multiple organ systems and, using clues from this screening, to identify patients at risk for adverse outcomes. The complexity of care required for these patients led our centre to develop a multidisciplinary Fontan Management Programme with the primary goals of earlier detection and treatment of complications through the development of a cohesive network of diverse medical subspecialists with Fontan expertise.
Approved treatments for bipolar depression are limited and associated with a spectrum of undesirable side effects. Lumateperone (lumateperone tosylate, ITI-007), a mechanistically novel antipsychotic that simultaneously modulates serotonin, dopamine, and glutamate neurotransmission, is FDA-approved for the treatment of schizophrenia. Lumateperone is currently being investigated for the treatment of bipolar depression (major depressive episodes [MDE] associated with bipolar I and bipolar II disorder). This Phase 3 randomized, double-blind, parallel-group, placebo-controlled multinational study (NCT03249376) investigated the efficacy and safety of lumateperone in patients with bipolar I or bipolar II disorder experiencing an MDE.
Patients (18–75 years) with a clinical diagnosis of bipolar I or bipolar II disorder who were experiencing an MDE (Montgomery-Åsberg Depression Rating Scale [MADRS] total score ≥20 and a Clinical Global Impression Scale-Bipolar Version-Severity [CGI-BP-S] score ≥4 at screening and baseline) were randomized to lumateperone 42 mg or placebo for 6 weeks. The primary and key secondary efficacy endpoints were change from baseline to Day 43 in MADRS total score and CGI-BP-S score, respectively. Secondary efficacy outcomes included response (MADRS improvement ≥50%) and remission (MADRS total score ≤12) at Day 43. Safety assessments included treatment-emergent adverse events, laboratory parameters, vital signs, extrapyramidal symptoms (EPS), and suicidality.
In this study, 377 patients received treatment (placebo, n=189; lumateperone 42 mg, n=188) and 333 completed treatment. Patients in the lumateperone 42-mg group had significantly greater mean improvement in MADRS total score change from baseline to Day 43 compared with placebo (least squares mean difference [LSMD]=-4.6; 95% confidence interval [CI]=-6.34, -2.83; effect size vs placebo [ES]=-0.56; P<.0001). Lumateperone treatment was associated with significant MADRS improvement both in patients with bipolar I disorder (LSMD=-4.0; 95% CI=-5.92, -1.99; ES=-0.49; P<.0001) and in patients with bipolar II disorder (LSMD=-7.0; 95% CI=-10.92, -3.16; ES=-0.81; P=.0004). The lumateperone 42-mg group also had significantly greater mean improvement in CGI-BP-S total score compared with placebo (LSMD=-0.9; 95% CI=-1.37, -0.51; ES=-0.46; P<.001). Compared with placebo, lumateperone was associated with significantly greater MADRS response (51.1% vs 36.7%; odds ratio=2.98; P<.001) and remission (P=.02) rates at Day 43. Lumateperone treatment was well tolerated, with minimal risk of EPS, metabolic, and prolactin-related side effects.
Lumateperone 42 mg significantly improved depressive symptoms in patients with bipolar I depression and in those with bipolar II depression. Lumateperone was generally well tolerated. These results suggest that lumateperone 42 mg may be a promising new treatment for MDEs associated with bipolar I or bipolar II disorder.
Adolescents who hold an entity theory of personality – the belief that people cannot change – are more likely to report internalizing symptoms during the socially stressful transition to high school. It has been puzzling, however, why a cognitive belief about the potential for change predicts symptoms of an affective disorder. The present research integrated three models – implicit theories, hopelessness theories of depression, and the biopsychosocial model of challenge and threat – to shed light on this issue. Study 1 replicated the link between an entity theory and internalizing symptoms by synthesizing multiple datasets (N = 6,910). Study 2 examined potential mechanisms underlying this link using 8-month longitudinal data and 10-day diary reports during the stressful first year of high school (N = 533, 3,199 daily reports). The results showed that an entity theory of personality predicted increases in internalizing symptoms through tendencies to make fixed trait causal attributions about the self and maladaptive (i.e., “threat”) stress appraisals. The findings support an integrative model whereby situation-general beliefs accumulate negative consequences for psychopathology via situation-specific attributions and appraisals.
ABSTRACT IMPACT: Despite its importance in systemic diseases such as diabetes, the eye is notably difficult to examine for non-specialists; this study introduces a fully automated approach for eye disease screening, coupling a deep learning algorithm with a robotically-aligned optical coherence tomography system to improve eye care in non-ophthalmology settings. OBJECTIVES/GOALS: This study aims to develop and test a deep learning (DL) method to classify images acquired from a robotically-aligned optical coherence tomography (OCT) system as normal vs. abnormal. The long-term goal of our study is to integrate artificial intelligence and robotic eye imaging to fully automate eye disease screening in diverse clinical settings. METHODS/STUDY POPULATION: Between August and October 2020, patients seen at the Duke Eye Center and healthy volunteers (age ≥18) were imaged with a custom, robotically-aligned OCT (RAOCT) system following routine eye exam. Using transfer learning, we adapted a preexisting convolutional neural network to train a DL algorithm to classify OCT images as normal vs. abnormal. The model was trained and validated on two publicly available OCT datasets and two of our own RAOCT volumes. For external testing, the top-performing model based on validation was applied to a representative averaged B-scan from each of the remaining RAOCT volumes. The model’s performance was evaluated against a reference standard of clinical diagnoses by retina specialists. Saliency maps were created to visualize the areas contributing most to the model predictions. RESULTS/ANTICIPATED RESULTS: The training and validation datasets included 87,697 OCT images, of which 59,743 were abnormal. The top-performing DL model had a training accuracy of 96% and a validation accuracy of 99%. For external testing, 43 eyes of 27 subjects were imaged with the robotically-aligned OCT system. Compared to clinical diagnoses, the model correctly labeled 18 out of 22 normal averaged B-scans and 18 out of 21 abnormal averaged B-scans. Overall, in the testing set, the model had an AUC for the detection of pathology of 0.92, an accuracy of 84%, a sensitivity of 86%, and a specificity of 82%. For the correctly predicted scans, saliency maps identified the areas contributing most to the DL algorithm’s predictions, which matched the regions of greatest clinical importance. DISCUSSION/SIGNIFICANCE OF FINDINGS: This is the first study to develop and apply a DL model to images acquired from a self-aligning OCT system, demonstrating the potential of integrating DL and robotic eye imaging to automate eye disease screening. We are working to translate this technology for use in emergency departments and primary care, where it will have the greatest impact.
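The abstract above describes adapting a preexisting convolutional neural network by transfer learning to label OCT B-scans as normal versus abnormal. A minimal sketch of that general approach is shown below; the ResNet-18 backbone, folder layout, and hyperparameters are illustrative assumptions and are not details reported by the study.

```python
# Generic transfer-learning sketch for binary OCT classification (normal vs abnormal).
# Backbone, directory layout, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# OCT B-scans are grayscale, so replicate to 3 channels for an ImageNet backbone.
tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects hypothetical oct_data/train/normal and oct_data/train/abnormal folders.
train_ds = datasets.ImageFolder("oct_data/train", transform=tfm)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Pretrained backbone with a new 2-class head; earlier layers are frozen.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```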
Objective: To stop transmission of hepatitis B virus (HBV) and hepatitis C virus (HCV) infections in association with myocardial perfusion imaging (MPI) at a cardiology clinic.
Design: Outbreak investigation and quasispecies analysis of the HCV hypervariable region 1 genome.
Setting: Outpatient cardiology clinic.
Patients: Patients undergoing MPI.
Methods: Case patients met definitions for HBV or HCV infection. Cases were identified through cross-matching of surveillance registries against clinic records and through serological screening. Observations of clinic practices were performed.
Results: During 2012–2014, 7 cases of HCV and 4 cases of HBV infection occurred in 4 distinct clusters among patients at the cardiology clinic. Among 3 case patients with HCV infection who had MPI on June 25, 2014, 2 had HCV RNA with 98.48% genetic identity. Among 4 case patients with HCV infection who had MPI on March 13, 2014, 3 had HCV RNA with 96.96%–99.24% molecular identity. In addition, 2 clusters of 2 patients each with HBV infection had MPI on March 7, 2012, and December 4, 2014. Clinic staff reused saline vials for >1 patient. No infection control breaches were identified at the compounding pharmacy that supplied the clinic. Patients seen in the clinic through March 27, 2015, were encouraged to seek testing for HBV, HCV, and human immunodeficiency virus. The clinic switched to all single-dose medications and single-use intravenous flushes on March 27, 2015, and no further cases were identified.
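The percentage identities quoted in the results above come from pairwise comparison of aligned HCV hypervariable region 1 sequences. A minimal sketch of that calculation, on made-up aligned sequences, might look like this:

```python
# Minimal sketch: percent identity between two aligned sequences.
# The sequences below are made up; real analyses compare HCV hypervariable
# region 1 quasispecies sequences after multiple sequence alignment.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Share of aligned positions (including gaps) at which the bases match."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

case_1 = "ACCGTTGGACTTACCGGATTACCGTGGAACCTT"
case_2 = "ACCGTTGGACTTACCGGATTACCGTGGAACCTA"
print(f"{percent_identity(case_1, case_2):.2f}% identity")  # 1 mismatch in 33 positions
```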
Conclusions: This prolonged healthcare-associated outbreak of HBV and HCV was most likely related to breaches in injection safety. Providers should follow injection safety guidelines in all practice settings.
The coronavirus disease 2019 (COVID-19) pandemic has had a considerable impact on US hospitalizations, affecting care processes and patient populations.
To evaluate the impact of the COVID-19 pandemic on central-line–associated bloodstream infections (CLABSIs) and catheter-associated urinary tract infections (CAUTIs) in hospitals.
We performed a retrospective study of CLABSIs and CAUTIs in 78 US hospitals during the 12 months before the COVID-19 pandemic and 6 months during the pandemic.
During the 2 study periods, there were 795,022 central-line days and 817,267 urinary catheter days. Compared to the period before the COVID-19 pandemic, CLABSI rates increased by 51.0% during the pandemic period, from 0.56 to 0.85 per 1,000 line days (P < .001), and by 62.9%, from 1.00 to 1.64 per 10,000 patient days (P < .001). Hospitals with monthly COVID-19 patients representing >10% of admissions had a National Healthcare Safety Network (NHSN) device standardized infection ratio for CLABSI that was 2.38 times higher than hospitals with <5% prevalence during the pandemic period (P = .004). Coagulase-negative Staphylococcus CLABSIs increased by 130%, from 0.07 to 0.17 events per 1,000 line days (P < .001), and Candida spp. CLABSIs increased by 56.9%, from 0.14 to 0.21 per 1,000 line days (P = .01). In contrast, no significant change was identified for CAUTIs (0.86 vs 0.77 per 1,000 catheter days; P = .19).
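The device-day denominators used above follow standard surveillance arithmetic: events divided by device days, scaled to 1,000. The sketch below illustrates the calculation with hypothetical event counts chosen only to land near the published rates; they are not the study's actual counts.

```python
# Illustrative rate arithmetic for device-associated infections (hypothetical counts).
def rate_per_1000_device_days(events: int, device_days: int) -> float:
    """Infections per 1,000 device days."""
    return 1000 * events / device_days

# Hypothetical example, not the study's actual event counts:
pre  = rate_per_1000_device_days(events=120, device_days=214_000)   # ~0.56
post = rate_per_1000_device_days(events=170, device_days=200_000)   # ~0.85
pct_change = 100 * (post - pre) / pre
print(f"{pre:.2f} -> {post:.2f} per 1,000 line days ({pct_change:.1f}% increase)")
```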
The COVID-19 pandemic was associated with substantial increases in CLABSIs but not CAUTIs. Our findings underscore the importance of hardwiring processes for optimal line care and regular feedback on performance to maintain a safe environment.
Vancomycin overuse is common, yet few data are available regarding how to improve stewardship of this antibiotic. We identified an association between the use of a PCR assay to rule out methicillin-resistant Staphylococcus aureus (MRSA) pneumonia and a significant, sustained decrease in average vancomycin days of therapy over a 30-month period.
Research career development awards (CDAs) facilitate the development of clinician-scientists. This study compared the academic achievements of individuals in a structured institutional “pre-K” CDA program, the Mayo Clinic Kern Scholars program, with those of individuals who applied for but were not admitted to the Kern program (“Kern applicants”) and of awardees of other unstructured internal CDAs.
This was a longitudinal cohort study of clinicians engaged in research at Mayo Clinic between 2010 and 2019. The primary outcome was time to the 15th new peer-reviewed publication after the program start, adjusted for baseline number of publications. Secondarily, we described successful awarding of federal funding by the NIH or VA.
The median (IQR) number of baseline publications was highest among Kern Scholars compared with Kern applicants or other CDA awardees [16 (12, 29) vs 5 (1, 11) and 8 (5, 16); P < 0.001]. After adjustment for baseline publications, the time to the 15th new publication was significantly shorter for Kern Scholars than for the two comparator groups (P < 0.001). Similar findings were observed for total new publications within 5 years (P < 0.001), as well as for the number of new first-/last-author publications within 5 years (P < 0.001). The overall frequency of K-awards, R-awards (or equivalent), or any funding was similar between groups, with the exception of R03 awards, which were significantly more common among Kern Scholars (P = 0.002).
The Kern Scholars program is a successful training model for clinician-scientists that demonstrated comparatively greater acceleration of scholarly productivity than other internal CDA programs.
Background: Automated testing instruments (ATIs) are commonly used by clinical microbiology laboratories to perform antimicrobial susceptibility testing (AST), whereas public health laboratories may use established reference methods such as broth microdilution (BMD). We investigated discrepancies in carbapenem minimum inhibitory concentrations (MICs) among Enterobacteriaceae tested by clinical laboratory ATIs and by reference BMD at the CDC. Methods: During 2016–2018, we conducted laboratory- and population-based surveillance for carbapenem-resistant Enterobacteriaceae (CRE) through the CDC Emerging Infections Program (EIP) sites (10 sites by 2018). We defined an incident case as the first isolation of Enterobacter spp. (E. cloacae complex or E. aerogenes), Escherichia coli, Klebsiella pneumoniae, K. oxytoca, or K. variicola resistant to doripenem, ertapenem, imipenem, or meropenem from normally sterile sites or urine, identified from a resident of the EIP catchment area in a 30-day period. Cases had isolates that were determined to be carbapenem resistant by clinical laboratory ATI MICs (MicroScan, BD Phoenix, or VITEK 2) or by other methods, using current Clinical and Laboratory Standards Institute (CLSI) criteria. A convenience sample of these isolates was tested by reference BMD at the CDC according to CLSI guidelines. Results: Overall, 1,787 isolates from 112 clinical laboratories were tested by BMD at the CDC. Of these, clinical laboratory ATI MIC results were available for 1,638 (91.7%); 855 (52.2%), from 71 clinical laboratories, did not confirm as CRE at the CDC. Nonconfirming isolates were tested on a MicroScan (235 of 462; 50.9%), a BD Phoenix (249 of 411; 60.6%), or a VITEK 2 (371 of 765; 48.5%). Lack of confirmation was most common among E. coli (62.2% of E. coli isolates tested) and Enterobacter spp. (61.4% of Enterobacter isolates tested) (Fig. 1A), and among isolates testing resistant to ertapenem by the clinical laboratory ATI (52.1%; Fig. 1B). Of the 1,388 isolates resistant to ertapenem in the clinical laboratory, 1,006 (72.5%) were resistant only to ertapenem. Of the 855 nonconfirming isolates, 638 (74.6%) were resistant only to ertapenem based on clinical laboratory ATI MICs. Conclusions: Nonconfirming isolates were widespread across laboratories and ATIs. Lack of confirmation was most common among E. coli and Enterobacter spp. Among nonconfirming isolates, most were resistant only to ertapenem. These findings may suggest that ATIs overcall resistance to ertapenem or that isolate transport and storage conditions affect ertapenem resistance. Further investigation into this lack of confirmation is needed, and CRE case identification in public health surveillance may need to account for this phenomenon.
Antarctic toothfish Dissostichus mawsoni and Weddell seals Leptonychotes weddellii are important mesopredators in the waters of the Antarctic continental shelf. They compete with each other for prey, yet the seals also prey upon toothfish. Such intraguild predation means that prevalence and respective demographic rates may be negatively correlated, but quantification is lacking. Following a review of their natural histories, we initiate an approach to address this deficiency by analysing scientific fishing catch per unit effort (CPUE; 1975–2011 plus sporadic effort to 2018) in conjunction with an annual index of seal abundance in McMurdo Sound, Ross Sea. We correlated annual variation in scientific CPUE to seal numbers over a 43-year period (1975–2018), complementing an earlier study in the same locality showing CPUE to be negatively correlated with spatial proximity to abundant seals. The observed relationship (more seals with lower CPUE, while controlling for annual trends in each) indicates the importance of toothfish as a dietary item to Weddell seals and highlights the probable importance of intra- and inter-specific competition as well as intraguild predation in seal-toothfish dynamics. Ultimately, it may be necessary to supplement fishery management with targeted ecosystem monitoring to prevent the fishery from having adverse effects on dependent species.
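Correlating two annual series "while controlling for annual trends in each", as described above, can be approximated by detrending each series against year and correlating the residuals. The sketch below uses made-up yearly values; the variable names and the simple linear detrending are assumptions for illustration, not the authors' exact method.

```python
# Sketch: correlate annual CPUE with annual seal counts while controlling for
# the long-term trend in each series.  The yearly values are simulated.
import numpy as np
from scipy.stats import linregress, pearsonr

years = np.arange(1975, 2019)
rng = np.random.default_rng(2)
seals = 600 + 3 * (years - 1975) + rng.normal(0, 40, years.size)   # hypothetical counts
cpue  = (5.0 - 0.04 * (years - 1975)
         - 0.003 * (seals - seals.mean())
         + rng.normal(0, 0.4, years.size))                          # hypothetical CPUE

def detrend(y, x):
    """Residuals of a simple linear regression of y on x."""
    fit = linregress(x, y)
    return y - (fit.intercept + fit.slope * x)

# Correlation between the detrended series (i.e. year-to-year variation only).
r, p = pearsonr(detrend(seals, years), detrend(cpue, years))
print(f"detrended correlation: r = {r:.2f}, P = {p:.3f}")
```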
Previous genetic association studies have failed to identify loci robustly associated with sepsis, and there have been no published genetic association studies or polygenic risk score analyses of patients with septic shock, despite evidence suggesting that genetic factors may be involved. We systematically collected genotype and clinical outcome data in the context of a randomized controlled trial from patients with septic shock to enrich the presence of disease-associated genetic variants. We performed genome-wide association studies of susceptibility and mortality in septic shock using 493 patients with septic shock and 2442 population controls, and polygenic risk score analysis to assess genetic overlap between septic shock risk/mortality and clinically relevant traits. One variant, rs9489328, located in the AL589740.1 noncoding RNA, was significantly associated with septic shock (p = 1.05 × 10⁻¹⁰); however, it is likely a false positive. We were unable to replicate variants previously reported to be associated (p < 1.00 × 10⁻⁶ in previous scans) with susceptibility to and mortality from sepsis. Polygenic risk scores for hematocrit and granulocyte count were negatively associated with 28-day mortality (p = 3.04 × 10⁻³; p = 2.29 × 10⁻³), and scores for C-reactive protein levels were positively associated with susceptibility to septic shock (p = 1.44 × 10⁻³). These results suggest that common variants of large effect do not influence septic shock susceptibility, mortality and resolution; however, genetic predispositions to clinically relevant traits are significantly associated with increased susceptibility and mortality in septic individuals.
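At their simplest, the polygenic risk scores used above are weighted sums of risk-allele dosages, with weights taken from an external genome-wide association study. A minimal sketch with made-up variants, effect sizes, and genotypes follows:

```python
# Minimal polygenic-risk-score sketch: weighted sum of risk-allele dosages.
# SNP identifiers, effect sizes (e.g. from a hypothetical GWAS of C-reactive
# protein), and genotype dosages below are all made up for illustration.
import numpy as np

effect_sizes = {          # per-allele effect (beta) from an external GWAS
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.08,
}

# Dosage of the effect allele (0, 1, or 2) for each individual at each SNP.
dosages = {
    "rs0000001": np.array([0, 1, 2, 1]),
    "rs0000002": np.array([2, 1, 0, 1]),
    "rs0000003": np.array([1, 1, 2, 0]),
}

# PRS for each individual: sum over SNPs of beta * dosage.
prs = sum(beta * dosages[snp] for snp, beta in effect_sizes.items())
print(prs)  # one score per individual; typically standardized before modelling
```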
Lewy body dementia, consisting of both dementia with Lewy bodies (DLB) and Parkinson's disease dementia (PDD), is considerably under-recognised clinically compared with its frequency in autopsy series.
This study investigated the clinical diagnostic pathways of patients with Lewy body dementia to assess whether difficulties in diagnosis may be contributing to these differences.
We reviewed the medical notes of 74 people with DLB and 72 with non-DLB dementia matched for age, gender and cognitive performance, together with 38 people with PDD and 35 with Parkinson's disease, matched for age and gender, from two geographically distinct UK regions.
The cases of individuals with DLB took longer to reach a final diagnosis (1.2 v. 0.6 years, P = 0.017), underwent more scans (1.7 v. 1.2, P = 0.002) and had more alternative prior diagnoses (0.8 v. 0.4, P = 0.002) than the cases of those with non-DLB dementia. Individuals diagnosed in one region of the UK had significantly more core features (2.1 v. 1.5, P = 0.007) than those in the other region, and were less likely to have dopamine transporter imaging (P < 0.001). For patients with PDD, more than 1.4 years prior to receiving a dementia diagnosis, 46% (12 of 26) had documented impaired activities of daily living because of cognitive impairment, 57% (16 of 28) had cognitive impairment in multiple domains (with 38% (6 of 16) having both), and 39% (9 of 23) were already receiving anti-dementia drugs.
Our results show that the pathway to diagnosis of DLB is longer and more complex than that for non-DLB dementia. There were also marked differences between regions in the thresholds clinicians adopt for diagnosing DLB and in the use of dopamine transporter imaging. For PDD, a diagnosis of dementia was delayed well beyond symptom onset and even beyond the start of treatment.
OBJECTIVES/GOALS: Peri-implantitis is the inflammation of the peri-implant mucosa and subsequent loss of supporting bone. Its treatment is successful in fewer than 40% of cases, mainly because of persistent bacterial infection. The goal of this project is to increase success rates by developing a robust antibiofilm, multi-biomolecular membrane that can be placed around implant surfaces. METHODS/STUDY POPULATION: A collagen membrane was soaked in the antimicrobial peptide GL13K solution overnight to form an interpenetrating fibrillar network. The nanostructure of the membrane was imaged with a scanning electron microscope (SEM). The hydrophobicity of the membrane was analyzed by water contact angle (WCA) measurements. The biodegradability was tested in a 0.01 mg/mL Type I collagenase solution for up to 5 weeks. The antimicrobial activity of the membrane was assessed with the Gram-positive oral bacterium Streptococcus gordonii. The cytotoxicity was evaluated by culturing human gingival fibroblasts (HGF), and osteogenesis was assessed using MC3T3 preosteoblasts. A pure collagen membrane was used as the control. Statistical significance (p<0.05) was determined by one-way ANOVA with Tukey’s HSD test. RESULTS/ANTICIPATED RESULTS: The antimicrobial peptide GL13K self-assembled into short fibrils (<1 µm long), which entangled with the larger collagen fibers (around 200 nm in diameter). The collagen fibers presented the characteristic periodic banding structure, which provides biomimetic, extracellular-matrix-like cues for cell behavior. The interpenetrating GL13K fibrils turned the highly hydrophilic collagen membrane into a hydrophobic membrane (WCA = 135°) and significantly reduced the rate of degradation by collagenases. The developed membrane was effective in preventing the attachment of S. gordonii, and a large portion of the attached bacteria were killed on the surface of the membrane. The incorporation of GL13K did not affect the cytocompatibility of the membrane for HGF. DISCUSSION/SIGNIFICANCE OF IMPACT: We developed an antibiofilm membrane with interpenetrating collagen and antimicrobial peptide fibrils. Its strong antimicrobial activity and low cytotoxicity support its further translational evaluation as a scaffold for increasing the success rate of peri-implantitis treatment.
Why patients with psychosis use cannabis remains debated. The self-medication hypothesis has received some support, but other evidence points towards an alleviation of dysphoria model. This study investigated the reasons for cannabis use in first-episode psychosis (FEP) and whether the strength of their endorsement changed over time.
FEP inpatients and outpatients who used cannabis, recruited from the South London and Maudsley, Oxleas and Sussex NHS Trusts (UK), rated their motives at baseline (n = 69), 3 months (n = 29) and 12 months (n = 36). A random intercept model was used to test the change in strength of endorsement over the 12 months. Paired-sample t-tests assessed the differences in mean scores between the five subscales of the Reasons for Use Scale (enhancement; social motive; coping with unpleasant affect; conformity and acceptance; and relief of positive symptoms and side effects) at each time-point.
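A random intercept model of the kind described above can be sketched as follows; the simulated data frame, column names, and the statsmodels mixed-model call are illustrative assumptions, not the study's actual analysis code.

```python
# Sketch of a random-intercept model for repeated motive ratings over time.
# The data are simulated stand-ins: 20 patients x 3 time-points x 2 subscales.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
patients = np.repeat(np.arange(1, 21), 6)
months   = np.tile([0, 0, 3, 3, 12, 12], 20)
subscale = np.tile(["enhancement", "coping"], 60)
# Scores decline over time on average, with a patient-level random intercept.
score = (4.0 - 0.05 * months
         + np.repeat(rng.normal(0, 0.5, 20), 6)
         + rng.normal(0, 0.3, 120))

df = pd.DataFrame({"patient": patients, "months": months,
                   "subscale": subscale, "score": score})

# Random intercept per patient; fixed effects of time, controlling for reason (subscale).
model = smf.mixedlm("score ~ C(months) + C(subscale)", data=df, groups=df["patient"])
print(model.fit().summary())
```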
Time had a significant effect on scores when controlling for reason; average scores on each subscale were higher at baseline than at 3 months and 12 months. At each time-point, patients endorsed ‘enhancement’ followed by ‘coping with unpleasant affect’ and ‘social motive’ more highly for their cannabis use than any other reason. ‘Conformity and acceptance’ followed closely. ‘Relief of positive symptoms and side effects’ was the least endorsed motive.
Patients endorsed their reasons for use at 3 months and 12 months less strongly than at baseline. Little support for the self-medication or alleviation of dysphoria models was found. Rather, patients rated ‘enhancement’ most highly for their cannabis use.
Carrot weevil, Listronotus oregonensis (LeConte) (Coleoptera: Curculionidae), is a pest of carrot (Daucus carota var. sativus Hoffmann; Apiaceae) throughout eastern Canada. Carrot weevil emergence and oviposition were monitored in commercial carrot fields in Nova Scotia. Cumulative degree days were calculated using a base temperature of 7 °C (DD7), and models were developed to predict cumulative emergence and oviposition using nonlinear regression. Cumulative emergence and oviposition were adequately explained as functions of DD7 by a three-parameter sigmoidal Hill equation. Our emergence model predicted initial and peak adult emergence at 35 and 387 DD7, respectively, with oviposition on carrot baits occurring as early as 42 DD7. The models were then validated to evaluate their predictive performance. Oviposition on carrot plants began at the fourth true-leaf stage (342 DD7) and continued until the eleventh true-leaf stage. Growers using these models can identify their window of opportunity to manage carrot weevil populations by targeting the majority of emerged adults before oviposition begins in the field.
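For readers who want to reproduce the general modelling approach, the sketch below fits one common three-parameter sigmoidal Hill parameterization of cumulative emergence against accumulated DD7; the observations and starting values are made up, and the exact parameterization used by the authors may differ.

```python
# Sketch of fitting a three-parameter Hill equation to cumulative emergence
# versus degree days (base 7 degrees C).  The observations below are made up
# for illustration; they are not the study's field data.
import numpy as np
from scipy.optimize import curve_fit

def hill(dd7, ymax, k, n):
    """Cumulative emergence as a sigmoidal function of accumulated DD7."""
    return ymax * dd7**n / (k**n + dd7**n)

# Hypothetical cumulative proportions of adults emerged at each DD7 total.
dd7       = np.array([35, 100, 200, 300, 387, 500, 650, 800])
emergence = np.array([0.01, 0.08, 0.25, 0.46, 0.55, 0.78, 0.92, 0.98])

params, _ = curve_fit(hill, dd7, emergence, p0=[1.0, 350.0, 3.0])
ymax, k, n = params
print(f"asymptote={ymax:.2f}, DD7 at 50% emergence={k:.0f}, slope={n:.2f}")

# Predicted cumulative emergence at, e.g., 387 DD7 (the reported peak-emergence point).
print(hill(387, *params))
```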