Introduction: Clinical assessment of patients with mild traumatic brain injury (mTBI) is challenging, and overuse of head CT in the emergency department (ED) is a major problem. Over the last decades, studies have attempted to reduce unnecessary head CTs following mTBI by identifying new tools to predict intracranial bleeding. The S100B serum protein level might help reduce this imaging, since a higher level of S100B protein has been associated with intracranial hemorrhage following mTBI in previous literature. The main objective of this study was to assess whether the S100B serum protein level is associated with clinically important brain injury and could be used to reduce the number of head CTs following mTBI. Methods: This prospective multicenter cohort study was conducted in five Canadian EDs. mTBI patients with a Glasgow Coma Scale (GCS) score of 13-15 in the ED and a blood sample drawn within 24 hours after the injury were included. S100B protein was analyzed using enzyme-linked immunosorbent assay (ELISA). All types of intracranial bleeding were reviewed by a radiologist who was blinded to the biomarker results. The main outcome was the presence of clinically important brain injury. Results: A total of 476 patients were included. Mean age was 41 ± 18 years and 150 (31.5%) were female. Twenty-four (5.0%) patients had a clinically significant intracranial hemorrhage, while 37 (7.8%) had any type of intracranial bleeding. The median S100B value (Q1-Q3) was 0.043 μg/L (0.008-0.080) for patients with clinically important brain injury versus 0.039 μg/L (0.023-0.059) for patients without clinically important brain injury. Sensitivity and specificity of the S100B protein level, if used alone to detect clinically important brain injury, were 16.7% (95% CI 4.7-37.4) and 88.5% (95% CI 85.2-91.3), respectively. Conclusion: The S100B serum protein level was not associated with clinically significant intracranial hemorrhage in mTBI patients. This protein did not appear to be useful for reducing the number of head CTs ordered in the ED and would have missed many clinically important brain injuries. Future research should focus on different ways to assess mTBI patients and ultimately reduce unnecessary head CTs.
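For readers who want to see how the reported test characteristics are obtained, the minimal Python sketch below computes sensitivity and specificity from biomarker values and outcomes at a chosen cut-off. The threshold and the patient values in the example are hypothetical placeholders, not data from this study, and the sketch is offered only as an illustration of the calculation.

def test_characteristics(values, outcomes, threshold):
    # values: S100B levels; outcomes: True if clinically important brain
    # injury is present; threshold: level at or above which the test is
    # called positive (the 0.10 ug/L cut-off used below is an assumption).
    tp = sum(v >= threshold and o for v, o in zip(values, outcomes))
    fn = sum(v < threshold and o for v, o in zip(values, outcomes))
    tn = sum(v < threshold and not o for v, o in zip(values, outcomes))
    fp = sum(v >= threshold and not o for v, o in zip(values, outcomes))
    return tp / (tp + fn), tn / (tn + fp)   # sensitivity, specificity

# Hypothetical usage:
sens, spec = test_characteristics([0.04, 0.12, 0.03, 0.15],
                                  [True, True, False, False], 0.10)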
It is widely assumed that celebrities are imbued with political capital and the power to move opinion. To understand the sources of that capital in the specific domain of sports celebrity, we investigate the popularity of global soccer superstars. Specifically, we examine players’ success in the Ballon d’Or—the most high-profile contest to select the world’s best player. Based on historical election results as well as an original survey of soccer fans, we find that certain kinds of players are significantly more likely to win the Ballon d’Or. Moreover, we detect an increasing concentration of votes on these kinds of players over time, suggesting a clear and growing hierarchy in the competition for soccer celebrity. Further analyses of support for the world’s two best players in 2016 (Lionel Messi and Cristiano Ronaldo) show that, if properly adapted, political science concepts like partisanship have conceptual and empirical leverage in ostensibly non-political contests.
Sorghum (Sorghum bicolor (L.) Moench) is an important resource to the national economy, and it is essential to assess the genetic diversity in existing sorghum germplasm for better conservation, utilization and crop improvement. The aim of this study was to evaluate the level of genetic diversity within and among sorghum germplasms collected from diverse institutes in Nigeria and Mali using single nucleotide polymorphism (SNP) markers. Genetic diversity among the germplasms was low, with an average polymorphism information content (PIC) value of 0.24. Analysis of molecular variance revealed 6% variation among germplasms and 94% within germplasms. The dendrogram revealed three clusters, indicating variation within the germplasms. Private alleles identified in the sorghum accessions from the National Center for Genetic Resources and Biotechnology, Ibadan, Nigeria and the International Crops Research Institute for the Semi-Arid Tropics, Kano, Nigeria show their potential for sorghum improvement and the discovery of new agronomic traits. The presence of private alleles and genetic variation within the germplasms indicates that the accessions are valuable resources for future breeding programs.
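As background on the diversity statistic quoted above, the short Python sketch below computes the polymorphism information content (PIC) of a single biallelic SNP from its allele frequencies, using the common Botstein et al. formulation; the allele frequencies shown are hypothetical and are not taken from this germplasm collection.

def pic_biallelic(p):
    # PIC of a biallelic SNP with major-allele frequency p:
    # PIC = 1 - (p^2 + q^2) - 2*p^2*q^2 (Botstein et al. formulation).
    q = 1.0 - p
    return 1.0 - (p ** 2 + q ** 2) - 2.0 * (p ** 2) * (q ** 2)

# A hypothetical SNP with allele frequencies 0.7/0.3 gives a PIC of about 0.33;
# the theoretical maximum for a biallelic marker is 0.375, reached at p = 0.5.
print(pic_biallelic(0.7))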
Few personalised medicine investigations have been conducted for mental health. We aimed to generate and validate a risk tool that predicts adult attention-deficit/hyperactivity disorder (ADHD).
Methods
Using logistic regression models, we generated a risk tool in a representative population cohort (ALSPAC – UK, 5113 participants, followed from birth to age 17) using childhood clinical and sociodemographic data, with internal validation. Predictors included sex, socioeconomic status, single-parent family, ADHD symptoms, comorbid disruptive disorders, childhood maltreatment, depressive symptoms, mother's depression and intelligence quotient. The outcome was defined as a categorical diagnosis of ADHD in young adulthood, without requiring the age-at-onset criterion. We also tested machine learning approaches for developing the risk models: random forest, stochastic gradient boosting and artificial neural networks. The risk tool was externally validated in the E-Risk cohort (UK, 2040 participants, birth to age 18), the 1993 Pelotas Birth Cohort (Brazil, 3911 participants, birth to age 18) and the MTA clinical sample (USA, 476 children with ADHD and 241 controls followed for 16 years, from a minimum age of 8 to a maximum age of 26 years).
Results
The overall prevalence of adult ADHD ranged from 8.1% to 12% in the population-based samples and was 28.6% in the clinical sample. The internal performance of the model in the generating sample was good, with an area under the curve (AUC) for predicting adult ADHD of 0.82 (95% confidence interval (CI) 0.79–0.83). Calibration plots showed good agreement between predicted and observed event frequencies from 0 to 60% probability. In the UK birth cohort test sample, the AUC was 0.75 (95% CI 0.71–0.78). In the Brazilian birth cohort test sample, the AUC was significantly lower, at 0.57 (95% CI 0.54–0.60). In the clinical trial test sample, the AUC was 0.76 (95% CI 0.73–0.80). The risk model did not predict adult anxiety or major depressive disorder. Machine learning approaches did not outperform the logistic regression models. An open-source, free risk calculator was generated for clinical use and is available online at https://ufrgs.br/prodah/adhd-calculator/.
Conclusions
The risk tool based on childhood characteristics specifically predicts adult ADHD in European and North American population-based and clinical samples, with discrimination comparable to commonly used clinical tools in internal medicine and higher than most previous attempts for mental and neurological disorders. However, its use in middle-income settings requires caution.
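To make the modelling approach described in the Methods concrete, the following Python sketch fits a logistic regression on childhood predictors and reports a held-out AUC using scikit-learn. The library choice, the synthetic data and the predictor layout are assumptions for illustration only; they do not reproduce the authors' analysis, and the published calculator at the URL above is the tool intended for clinical use.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: rows are children, columns are stand-ins for
# childhood predictors such as ADHD symptoms, IQ and depressive symptoms.
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1.5).astype(int)  # proxy for adult ADHD

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.2f}")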
Introduction: Low acuity patients have been controversially tagged as a source of emergency department (ED) misuse. Authorities in many Canadian health regions have set up policies so that these patients preferably present to walk-in clinics (WIC). We compared the cost and quality of the care given to low acuity patients in an academic ED and a WIC of Québec City during fiscal year 2015-16. Methods: We conducted an ambidirectional (prospective and retrospective) cohort study using a time-driven activity-based costing method. This method uses the duration of care processes (e.g., triage) to allocate to patient care all direct costs (e.g., personnel, consumables), overheads (e.g., building maintenance) and physician charges. We included consecutive adult patients, ambulatory at all times and discharged from the ED or WIC with a diagnosis of upper respiratory tract infection (URTI), urinary tract infection (UTI) or low back pain. Mean cost [95% CI] per patient per condition was compared between settings after risk adjustment for age, sex, vital signs, number of regular medications and comorbidities using generalized log-gamma regression models. Proportions [95% CI] of antibiotic prescription and chest X-ray use in URTI, compliance with provincial guidelines on the use of antibiotics in UTI, and spine X-ray use in low back pain were compared between settings using a Pearson chi-square test. Results: A total of 409 patients were included. ED and WIC groups were similar in terms of age, sex and vital signs on presentation, but ED patients had a greater burden of comorbidities. Adjusted mean cost (2016 CAN$) of care was significantly higher in the ED than in the WIC (p < 0.0001) for URTI (78.42 [64.85-94.82] vs. 59.43 [50.43-70.06]), UTI (78.88 [69.53-89.48] vs. 53.29 [43.68-65.03]) and low back pain (87.97 [68.30-113.32] vs. 61.71 [47.90-79.51]). For URTI, antibiotics were more frequently prescribed in the WIC (44.1% [34.3-54.3] vs. 5.8% [1.2-16.0]; p < 0.0001), and chest X-rays were more frequently used in the ED (26.9% [15.6-41.0] vs. 13.7% [7.7-22.0]; p = 0.05). No significant differences were observed in compliance with guidelines on the use of antibiotics in UTI or in the use of spine X-rays in low back pain. Conclusion: The total cost of care for low acuity patients is lower in walk-in clinics than in EDs. However, our results suggest that quality-of-care issues should be considered in determining the best alternate setting for treating ambulatory emergency patients.
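The time-driven activity-based costing logic described in the Methods amounts to summing, over care processes, the process duration multiplied by a per-minute capacity cost rate (covering direct costs and overheads) and then adding physician charges. The Python sketch below is a simplified, hypothetical illustration of that allocation rule; the process names, durations, rates and charges are invented, and the sketch is not the study's actual costing model.

def tdabc_cost(process_minutes, cost_rate_per_minute, physician_charges=0.0):
    # process_minutes: {process name: duration in minutes}
    # cost_rate_per_minute: {process name: capacity cost rate in $/minute},
    # a rate that bundles direct costs and overheads for that process.
    allocated = sum(minutes * cost_rate_per_minute[p]
                    for p, minutes in process_minutes.items())
    return allocated + physician_charges

# Hypothetical URTI visit (all numbers invented for illustration):
visit = {"triage": 6, "nursing assessment": 10, "physician assessment": 12}
rates = {"triage": 1.1, "nursing assessment": 0.9, "physician assessment": 1.4}
print(tdabc_cost(visit, rates, physician_charges=35.0))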
The hypothalamic–pituitary–adrenal axis (HPAA) plays a critical role in the functioning of all other biological systems. Thus, studying how the environment may influence its ontogeny is paramount to understanding the developmental origins of health and disease. The early post-conceptional (EPC) period could be particularly important for the HPAA, as the effects of exposures on organisms’ first cells can be transmitted through all cell lineages. We evaluated putative relationships between EPC maternal cortisol levels, a marker of physiologic stress, and their children’s pre-pubertal HPAA activity (n=22 dyads). Maternal first-morning urinary (FMU) cortisol, collected every other day during the first 8 weeks post-conception, was associated with children’s FMU cortisol collected daily around the start of the school year, a non-experimental challenge, as well as with salivary cortisol responses to an experimental challenge (all Ps<0.05), with some sex-related differences. We investigated whether epigenetic mechanisms statistically mediated these links and, therefore, could provide cues as to the possible biological pathways involved. EPC cortisol was associated with >5% change in children’s buccal epithelial cell DNA methylation at 867 sites, while children’s HPAA activity was associated with five CpG sites. Yet no CpG sites were related to both EPC cortisol and children’s HPAA activity. Thus, these epigenetic modifications did not statistically mediate the observed physiological links. Larger, prospective peri-conceptional cohort studies including frequent bio-specimen collection from mothers and children will be required to replicate our analyses and, if our results are confirmed, to identify the biological mechanisms mediating the statistical links observed between maternal EPC cortisol and children’s HPAA activity.
Objectives: Studies suggest that impairments in some of the same domains of cognition occur in different neuropsychiatric conditions, including those known to share genetic liability. Yet, direct, multi-disorder cognitive comparisons are limited, and it remains unclear whether overlapping deficits are due to comorbidity. We aimed to extend the literature by examining cognition across different neuropsychiatric conditions and addressing comorbidity. Methods: Subjects were 486 youth consecutively referred for neuropsychiatric evaluation and enrolled in the Longitudinal Study of Genetic Influences on Cognition. First, we assessed general ability, reaction time variability (RTV), and aspects of executive functions (EFs) in youth with non-comorbid forms of attention-deficit/hyperactivity disorder (ADHD), mood disorders and autism spectrum disorder (ASD), as well as in youth with psychosis. Second, we determined the impact of comorbid ADHD on cognition in youth with ASD and mood disorders. Results: For EFs (working memory, inhibition, and shifting/flexibility), we observed weaknesses in all diagnostic groups when participants’ own ability was the referent. Decrements were subtle in relation to published normative data. For RTV, weaknesses emerged in youth with ADHD and mood disorders, but trend-level results could not rule out decrements in other conditions. Comorbidity with ADHD did not impact the pattern of weaknesses for youth with ASD or mood disorders but increased the magnitude of the decrement in those with mood disorders. Conclusions: Youth with ADHD, mood disorders, ASD, and psychosis show EF weaknesses that are not due to comorbidity. Whether such cognitive difficulties reflect genetic liability shared among these conditions requires further study. (JINS, 2018, 24, 91–103)
Introduction: Redirecting low acuity patients from emergency departments to primary care walk-in clinics has been identified as a priority by many health authorities. Promoting family physicians for the management of ambulatory patients with urgent health concerns reflects the assumption that primary care facilities can offer high-quality and more affordable ambulatory emergency care. However, no performance assessment framework has been developed for ambulatory emergency care and, consequently, the quality of care provided in these alternate settings has never been formally compared. Primary objective: To identify structure, process and outcome indicators for ambulatory emergency care. Methods: We will identify and develop quality indicators (QIs) for ambulatory emergency care using the RAND/UCLA Appropriateness Method (RAM), in three steps. First, we will perform a scoping literature review to inventory 1) all previously recommended QIs assessing care provided to ambulatory emergency patients in the ED or primary care settings; 2) all conditions evaluated with the retrieved QIs; and 3) all outcomes measured by the same QIs. Second, a steering committee composed of the research team and of international experts in performance assessment in emergency and primary care will be presented with the lists of QI-related conditions and outcomes. They will be asked to identify potential outcome indicators for ambulatory emergency care by generating any relevant combinations of one condition and one outcome (e.g. acute asthma exacerbation/re-consultation). Committee members will be given the latitude to use and pair any conditions or outcomes not included in the lists, as long as they think the resulting indicators are compatible with the study objectives. Using a structured nominal group approach, they will combine their suggestions and refine the list of potential QIs. This list of potential outcome indicators, composed of “condition/outcome” pairs, will be merged with the list of already published QIs identified during the literature review. Third, as per the RAM standards, we will assemble an international multidisciplinary panel (n=20) of patients, emergency and primary care providers, researchers and decision makers, based on recommendations from international emergency and primary care associations and from the Canadian Strategy for Patient-Oriented Research (SPOR) Support Units. Through iterative rounds of ratings using both web-based survey tools and videoconferencing, panelists will independently assess all candidate QIs. They will be asked to rate, on a nine-point scale, the extent to which each QI is a relevant and useful measure of ambulatory emergency care quality. From one round to the next, QIs with a median panelist rating of one to three will be excluded, those with a median score of seven or more will be automatically included in the final list, and QIs with a median score of four to six will be retained for further deliberation among the panelists. Rounds of ratings will be conducted until all QIs are classified. Impact: The QIs identified will be used to develop a performance assessment framework for ambulatory emergency care. This will represent an essential step toward testing the assumption that EDs and primary care walk-in clinics provide equivalent care quality to low acuity patients.
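The round-by-round decision rule stated above (median rating 1–3 excluded, 7–9 included, 4–6 carried forward for deliberation) can be expressed compactly; the Python sketch below implements it on hypothetical panelist ratings and is offered only as an illustration of the procedure, not as part of the study protocol.

from statistics import median

def classify_qi(ratings):
    # Classify a candidate quality indicator from panelists' 1-9 ratings,
    # following the thresholds described above.
    m = median(ratings)
    if m <= 3:
        return "excluded"
    if m >= 7:
        return "included"
    return "retained for deliberation"

# Hypothetical ratings from a 20-member panel:
print(classify_qi([8, 7, 9, 7, 8, 6, 7, 9, 8, 7, 8, 7, 6, 9, 8, 7, 7, 8, 9, 7]))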
Introduction: In its prospective cohorts of independent seniors with minor injuries, the CETI (Canadian Emergency Team Initiative) has shown that minor injuries trigger a spiral of mobility and functional decline in 18% of these seniors up to 6 months post-injury. Because of their effects on multiple physiological systems, multicomponent mobility interventions with physical exercises are among the best methods to limit frailty and improve mobility and function in seniors. Methods: Pilot clinical trial among 4 groups of seniors discharged home after an ED consultation for minor injuries. Interventions: two 1-hour sessions per week for 12 weeks of muscle strengthening, functional and balance exercises under kinesiology supervision, either at home (Jintronix tele-rehabilitation platform) or in community-based programs (YWCA, PIED), vs. usual ED discharge (CONTROL). Measures: functional status in ADLs (Older Americans Resources Scale), global physical and social functioning (SF-12 questionnaire) and physical activity level (RAPA questionnaire) at the initial ED visit and at 3 months. Results: 135 seniors were included (Controls: n=50; PIED: n=28; Jintronix: n=27; YWCA: n=18). Mean age was 72.6±6.2 years, 45% were prefrail, and 86% and 8% had fall-related or motor vehicle-related injuries, respectively (e.g. fractures: 30%; contusions: 37%). The intervention could start as early as 7 days post-injury. Seniors in the intervention groups (Home, YWCA or PIED) maintained or improved their functional status (84% vs 60%, p≤0.05) and their physical (73% vs 59%, p=0.05) and social (45% vs 23%, p≤0.05) functioning. While 21% of CONTROLs improved their physical activity level three months post-injury, 46% of seniors in the intervention groups did (p≤0.05). Conclusion: Exercise-based interventions can help improve seniors’ function and mobility after a minor injury.
Background: The degree of overlap between schizophrenia (SCZ) and affective psychosis (AFF) has been a recurring question since Kraepelin’s subdivision of the major psychoses. Studying nonpsychotic relatives allows a comparison of disorder-associated phenotypes without potential confounds that can obscure distinctive features of the disorder. Because attention and working memory have been proposed as potential endophenotypes for SCZ and AFF, we compared these cognitive features in individuals at familial high risk (FHR) for the disorders. Methods: Young, unmedicated, first-degree relatives (ages 13–25 years) at FHR-SCZ (n=41) and FHR-AFF (n=24) and community controls (CCs, n=54) were tested using attention and working memory versions of the Auditory Continuous Performance Test. To determine whether schizotypal traits or current psychopathology accounted for cognitive deficits, we evaluated psychosis proneness using three Chapman Scales (Revised Physical Anhedonia, Perceptual Aberration and Magical Ideation) and assessed psychopathology using the Hopkins Symptom Checklist-90 Revised. Results: Compared to controls, the FHR-AFF sample was significantly impaired in auditory vigilance, while the FHR-SCZ sample was significantly worse in working memory. Both FHR groups showed significantly higher levels of physical anhedonia and of some psychopathological dimensions than controls. Adjusting for physical anhedonia, phobic anxiety, depression, psychoticism and obsessive-compulsive symptoms eliminated the FHR-AFF vigilance effect but not the working memory deficit in FHR-SCZ. Conclusions: The working memory deficit in FHR-SCZ was the more robust of the cognitive impairments after accounting for psychopathological confounds and is supported as an endophenotype. Examination of larger samples of people at familial risk for different psychoses remains necessary to confirm these findings and to clarify the role of vigilance in FHR-AFF. (JINS, 2016, 22, 1026–1037)
An estimated 20–90% of the world’s population has had contact with Toxoplasma gondii parasites. The aim of this study was to determine the seroprevalence of and risk factors associated with T. gondii infection in the Central Region, Ghana. A community-based cross-sectional study was conducted in three selected communities. Serum samples were tested for the presence of anti-T. gondii IgG and IgM antibodies by ELISA. The serological criterion for seropositivity was a positive test result for either of the two anti-Toxoplasma antibodies (IgG or IgM) or for both. In all, 390 participants with a mean age of 47.0 years, consisting of 118 (30.3%) males and 272 (69.7%) females, were tested. The overall seroprevalence of T. gondii was 85% (333/390), with fishermen, farmers and fishmongers having the highest seropositivity. IgG and IgM antibodies were detected in 329 (84%) and 25 (6%) participants, respectively, while both IgG and IgM antibodies were detected in 21 (5%) of the participants. IgM-only and IgG-only antibodies were detected in 1% (4/390) and 79% (308/390) of participants, respectively. There was a significant relationship between Toxoplasma seropositivity and contact with soil, the presence of a cat in the surrounding area, age, source of drinking water, level of formal education and socioeconomic status. The results suggest that the seashore may serve as a good ground for the sporulation and survival of Toxoplasma oocysts.
We examine prospectively the influence of two separate but potentially inter-related factors in the etiology of post-traumatic stress disorder (PTSD): childhood maltreatment as conferring a susceptibility to the PTSD response to adult trauma and juvenile disorders as precursors of adult PTSD.
Method
The Dunedin Multidisciplinary Health and Development Study (DMHDS) is a birth cohort (n = 1037) from the general population of New Zealand's South Island, with multiple assessments up to age 38 years. DSM-IV PTSD was assessed among participants exposed to trauma at ages 26–38. Complete data were available on 928 participants.
Results
Severe maltreatment in the first decade of life, experienced by 8.5% of the sample, was associated significantly with the risk of PTSD among those exposed to adult trauma [odds ratio (OR) 2.64, 95% confidence interval (CI) 1.16–6.01], compared to no maltreatment. Moderate maltreatment, experienced by 27.2%, was not associated significantly with that risk (OR 1.55, 95% CI 0.85–2.85). However, the two estimates did not differ significantly from one another. Juvenile disorders (ages 11–15), experienced by 35% of the sample, independent of childhood maltreatment, were associated significantly with the risk of PTSD response to adult trauma (OR 2.35, 95% CI 1.32–4.18).
Conclusions
Severe maltreatment is associated with risk of PTSD response to adult trauma, compared to no maltreatment, and juvenile disorders, independent of earlier maltreatment, are associated with that risk. The role of moderate maltreatment remains unresolved. Larger longitudinal studies are needed to assess the impact of moderate maltreatment, experienced by the majority of adult trauma victims with a history of maltreatment.
Persons developing schizophrenia (SCZ) manifest various pre-morbid neuropsychological deficits, studied most often by measures of IQ. Far less is known about pre-morbid neuropsychological functioning in individuals who later develop bipolar psychoses (BP). We evaluated the specificity and impact of family history (FH) of psychosis on pre-morbid neuropsychological functioning.
Method
We conducted a nested case-control study investigating the associations of neuropsychological data collected systematically at age 7 years for 99 adults with psychotic diagnoses (including 45 SCZ and 35 BP) and 101 controls, drawn from the New England cohort of the Collaborative Perinatal Project (CPP). A mixed-model approach evaluated full-scale IQ, four neuropsychological factors derived from principal components analysis (PCA), and the profile of 10 intelligence and achievement tests, controlling for maternal education, race and intra-familial correlation. We used a deviant responder approach (<10th percentile) to calculate rates of impairment.
Results
There was a significant linear trend in performance across the control, BP and SCZ groups, with the SCZ group performing worst. The profile of childhood deficits for persons with SCZ did not differ significantly from BP. Neuropsychological impairment was identified in 42.2% of SCZ, 22.9% of BP and 7% of controls. The presence of psychosis in first-degree relatives (FH+) significantly increased the severity of childhood impairment for SCZ but not for BP.
Conclusions
Pre-morbid neuropsychological deficits are found in a substantial proportion of children who later develop SCZ, especially in the SCZ FH+ subgroup, but less so in BP, suggesting especially impaired neurodevelopment underlying cognition in pre-SCZ children. Future work should assess genetic and environmental factors that explain this FH effect.
Background. Suicide is a common cause of death in anorexia nervosa, and suicide attempts occur often in both anorexia nervosa and bulimia nervosa. No longitudinal study of eating disorders with frequent follow-up intervals has examined predictors of suicide attempts. The objective of this study was to determine predictors of serious suicide attempts in women with eating disorders.
Method. In a prospective longitudinal study, women diagnosed with either DSM-IV anorexia nervosa (n=136) or bulimia nervosa (n=110) were interviewed and assessed for suicide attempts and suicidal intent every 6–12 months over 8.6 years.
Results. Fifteen percent of subjects reported at least one suicide attempt during prospective follow-up. Significantly more anorexic (22.1%) than bulimic subjects (10.9%) made a suicide attempt. Multivariate analyses indicated that the unique predictors of suicide attempts for anorexia nervosa included the severity of both depressive symptoms and drug use over the course of the study. For bulimia nervosa, a history of drug use disorder at intake and the use of laxatives during the study significantly predicted suicide attempts.
Conclusions. Women with anorexia nervosa or bulimia nervosa are at considerable risk of attempting suicide. Clinicians should be aware of this risk, particularly in anorexic patients with substantial co-morbidity.
For the purpose of observing β Cephei and SPB stars with optical telescopes on board future space missions, we have developed an automatic photometric search for stars falling inside the β Cephei and SPB instability strips. A list has been compiled which is available upon request from the authors (blay@castor.daa.uv.es;juan@pleione.daa.uv.es).
The boundaries of the instability strips have been defined using observational β – c0 HR diagrams based on Strömgren photometry (Sterken & Jerzykiewicz 1993 = SK93). The ZAMS has been set as the lower limit of both strips. The low- and high-temperature limits are those given by SK93, and the upper limit is fixed by the β photometry of β Cephei stars found in the literature. For the SPB stars we have followed the same procedure. We have found the following boundaries: β Cephei: 0.2295c0 + 2.6263 > β > 0.0394c0 + 2.5656; 0.17 > c0 > –0.1; SPB: 2.630 > β > 0.2642c0 + 2.6207; 0.8 > c0 > 0.17.
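The boundary inequalities above translate directly into a membership test on the Strömgren indices (β, c0). The Python sketch below is a plain transcription of the published limits for illustration; it is not the authors' search code, and the example star values are hypothetical.

def in_beta_cephei_strip(beta, c0):
    # beta Cephei strip: 0.2295*c0 + 2.6263 > beta > 0.0394*c0 + 2.5656,
    # with 0.17 > c0 > -0.1.
    return -0.1 < c0 < 0.17 and 0.0394 * c0 + 2.5656 < beta < 0.2295 * c0 + 2.6263

def in_spb_strip(beta, c0):
    # SPB strip: 2.630 > beta > 0.2642*c0 + 2.6207, with 0.8 > c0 > 0.17.
    return 0.17 < c0 < 0.8 and 0.2642 * c0 + 2.6207 < beta < 2.630

# Example: a star with beta = 2.60 and c0 = 0.05 (hypothetical photometry)
print(in_beta_cephei_strip(2.60, 0.05), in_spb_strip(2.60, 0.05))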
As part of a survey to study the health and living conditions of the elderly population, a random sample of residents aged 65 and over was examined using the Clinical Interview Schedule (CIS) in order to evaluate their psychiatric status. The aim of this study was to evaluate this standard method of assessment as a case-identification instrument in our country. The schedule was completed by 91 subjects. It is easily administered, easily scored and economical on time, and its completion rate is high. The weighted total scores (WTS) ranged from 0 to 48. Using the case criteria defined by Cooper & Schwarz (1982), 27 subjects (30%) were considered cases and 64 (70%) were regarded as non-cases. The sensitivity coefficients for the WTS were examined against the overall severity rating at different cut-off points. The optimum cut-off lies anywhere between 16 and 20 points. The WTS had higher validity coefficients (sensitivity, specificity) for detecting the following diagnostic categories: normals (100%, −), personality disorders (100%, 92%) and affective disorders (100%, 75%). In general the CIS items were given low ratings, and psychotic symptoms were rarely found in this sample. One main problem arose: the item on depersonalization was misunderstood by some patients, probably because they interpreted it as an upsetting memory disturbance.
Elections as the Product of Collective Decision: Federal Elections from 1957 to 1965 in Quebec
Is it fruitful to consider electoral results as the product of collective decisions, as suggested by Vincent Lemieux in his study of provincial elections in Quebec? In order to answer this question, the authors applied the same approach to new data, the results of federal elections in Quebec from 1957 to 1965. It is first noted that the operational concept of collective decision should be linked to a theoretical one. It is suggested that this theoretical concept would be more meaningful if a structural approach were adopted: it could be a collective unconscious (“inconscient collectif”) influencing the ridings in one way or another. In this sense the ridings face many choices: they may tend to choose a certain party, to vote for the party in government (or the contrary), or they may wish to re-elect the same party (or the contrary); finally, there might be a tendency to vote like the majority of the ridings (or, once again, the contrary). The comparison between expected and actual frequencies of the different sets of partisan choices argues in favour of the importance of party loyalty. The cleavage between the socio-economic characteristics of the traditionally Liberal, Social Credit, and Conservative ridings is also evident. The analysis of the other choices raises many problems, however. The same electoral result may refer to many different sets of choices, and the relationships between these choices make it almost impossible to measure the influence of each structural mechanism while controlling for all the others. It thus becomes almost impossible to discover which structural law does in fact govern the ridings’ behaviour. On the whole, then, structural and causal studies are faced with the same basic methodological problem: when one wants to measure the sole impact of a particular causal factor or structural mechanism which interacts with many others, it is necessary to make some assumptions, like linearity and additivity, which exclude any complex interaction.
Bucculatrix canadensisella Chambers, an important defoliator of birch trees in Canada, overwinters as a cocoon in forest-floor debris. Cocoons obtained from the field over a 3-year period showed that high mortality attributable to climatic conditions could occur during this stage. Field and laboratory experiments indicated that cocoons were quite resistant to various conditions of moisture and to cold in the fall and through the winter, but that they were highly susceptible to dry conditions at the time of morphogenesis in the spring. Cool temperatures retarded adult emergence; in nature, this would expose cocoons to a prolonged period of predation. It is concluded that weather conditions during a critical period of a few weeks in late spring could greatly affect cocoon survival of this insect. Warm, moist weather would offer optimum conditions while cool and (or) dry weather would result in high mortality.