Contemporary proof assistants such as Coq require that recursive functions be terminating and corecursive functions be productive to maintain the logical consistency of their type theories, and some ensure these properties using syntactic checks. However, being syntactic, these checks are inherently delicate and restrictive, preventing users from easily writing obviously terminating or productive functions at their whim.
Meanwhile, there exist many sized type theories that perform type-based termination and productivity checking, including theories based on the Calculus of (Co)Inductive Constructions (CIC), the core calculus underlying Coq. These theories are more robust and compositional in comparison. So why haven’t they been adapted to Coq?
In this paper, we venture to answer this question with CIC$\widehat{\ast}$, a sized type theory based on CIC. It extends past work on sized types in CIC with additional Coq features such as global and local definitions. We also present a corresponding size inference algorithm and implement it within Coq’s kernel; for maximal backward compatibility with existing Coq developments, it requires no additional annotations from the user.
In our evaluation of the implementation, we find a severe performance degradation when compiling parts of the Coq standard library, inherent to the algorithm itself. We conclude that if we wish to maintain backward compatibility, using size inference as a replacement for syntactic checking is impractical in terms of performance.
Recent research has shown that risk and reward are positively correlated in many environments, and that people have internalized this association as a “risk-reward heuristic”: when making choices based on incomplete information, people infer probabilities from payoffs and vice-versa, and these inferences shape their decisions. We extend this work by examining people’s expectations about another fundamental trade-off — that between monetary reward and delay. In 2 experiments (total N = 670), we adapted a paradigm previously used to demonstrate the risk-reward heuristic. We presented participants with intertemporal choice tasks in which either the delayed reward or the length of the delay was obscured. Participants inferred larger rewards for longer stated delays, and longer delays for larger stated rewards; these inferences also predicted people’s willingness to take the delayed option. In exploratory analyses, we found that older participants inferred longer delays and smaller rewards than did younger ones. All of these results replicated in 2 large-scale pre-registered studies with participants from a different population (total N = 2138). Our results suggest that people expect intertemporal choice tasks to offer a trade-off between delay and reward, and differ in their expectations about this trade-off. This “delay-reward heuristic” offers a new perspective on existing models of intertemporal choice and provides new insights into unexplained and systematic individual differences in the willingness to delay gratification.
Interactions between shock waves and boundary layers produce flow separations and augmented pressure/thermal loads in hypersonic flight. This study provides details of Mach 7 impinging-shock-flat-plate experiments conducted in the T4 Stalker Tube. Measurements were taken at flow conditions of Mach 7.0 (2.44 MJ kg$^{-1}$) and Mach 7.7 (2.88 MJ kg$^{-1}$) flight enthalpies with a range of freestream unit Reynolds numbers from $1.43 \times 10^{6}$ m$^{-1}$ to $5.01 \times 10^{6}$ m$^{-1}$. A shock generator at $12^{\circ }$ or $16^{\circ }$ to the freestream created an oblique shock which impinged on a boundary layer over a flat plate to induce flow separation. The flow field was examined using simultaneous measurements of wall static pressure, heat transfer and schlieren visualisation. Measured heat transfer along the flat plate without the shock impingement indicated that the boundary layer remained laminar for all flow conditions. The shock impingement flow field was successfully established within the facility test duration. The onset of separation was observed by a rise in wall pressure and a decrease in heat transfer at the location corresponding to the stem of the separation shock. Downstream of this initial rise, an increased pressure and higher heating loads were observed. The heat-transfer levels also indicated an immediate boundary layer transition due to the shock impingement. The separation data of the present work showed good agreement with our previous work on shock impingement on heated walls (Chang et al., J. Fluid Mech., vol. 908, 2021, pp. 1–13). A comparison with the previous scaling indicated that the separation also relates to the pressure ratio and the wall temperature parameter.
Despite reports of an elevated risk of breast cancer associated with antipsychotic use in women, existing evidence remains inconclusive. We aimed to review existing observational data in the literature and evaluate this hypothesised association.
Methods
We searched Embase, PubMed and Web of Science™ databases on 27 January 2022 for articles reporting relevant cohort or case-control studies published since inception, supplemented with hand searches of the reference lists of the included articles. Quality of studies was assessed using the Newcastle-Ottawa Scale. We generated the pooled odds ratio (OR) and pooled hazard ratio (HR) using a random-effects model to quantify the association. This study was registered with PROSPERO (CRD42022307913).
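As a sketch of the pooling step described above, the snippet below implements a standard inverse-variance random-effects meta-analysis with the DerSimonian–Laird estimator of between-study variance (one common choice; the review does not specify which estimator was used). The hazard ratios and confidence intervals are invented for illustration and are not the estimates from the included studies; pooling the case-control ORs would follow the same steps on the log-OR scale.

```python
import numpy as np

# Hypothetical per-study hazard ratios and 95% CIs (illustrative only).
hr = np.array([1.20, 1.45, 1.60, 1.10, 1.35])
ci_low = np.array([0.95, 1.10, 1.25, 0.85, 1.05])
ci_high = np.array([1.52, 1.91, 2.05, 1.42, 1.74])

# Work on the log scale; recover each study's standard error from the CI width.
y = np.log(hr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
w = 1 / se**2                              # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of the between-study variance tau^2.
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)         # Cochran's Q
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

# Random-effects weights, pooled estimate and 95% CI back on the HR scale.
w_re = 1 / (se**2 + tau2)
y_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
pooled = np.exp([y_re, y_re - 1.96 * se_re, y_re + 1.96 * se_re])
print(f"pooled HR {pooled[0]:.2f} (95% CI {pooled[1]:.2f}-{pooled[2]:.2f})")
```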
Results
Nine observational studies, including five cohort and four case-control studies, were eventually included for review (N = 2 031 380) and seven for meta-analysis (N = 1 557 013). All included studies were rated as high-quality (seven to nine stars). Six studies reported a significant association of antipsychotic use with breast cancer, and a stronger association was reported when a greater extent of antipsychotic use, e.g. longer duration, was operationalised as the exposure. Pooled estimates of HRs extracted from cohort studies and ORs from case-control studies were 1.39 [95% confidence interval (CI) 1.11–1.73] and 1.37 (95% CI 0.90–2.09), suggesting a moderate association of antipsychotic use with breast cancer.
Conclusions
Antipsychotic use is moderately associated with breast cancer, possibly mediated by prolactin-elevating properties of certain medications. This risk should be weighed against the potential treatment effects for a balanced prescription decision.
Objective:
To describe the evolution of respiratory antibiotic prescribing during the coronavirus disease 2019 (COVID-19) pandemic across 3 large hospitals that maintained antimicrobial stewardship services throughout the pandemic.
Design:
Retrospective interrupted time-series analysis.
Setting:
A multicenter study was conducted including medical and intensive care units (ICUs) from 3 hospitals within a Canadian epicenter for COVID-19.
Methods:
Interrupted time-series analysis was used to analyze rates of respiratory antibiotic utilization measured in days of therapy per 1,000 patient days (DOT/1,000 PD) in medical units and ICUs. Each of the first 3 waves of the pandemic were compared to the baseline.
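As an illustration of the analysis described above, here is a minimal sketch of a segmented (interrupted time-series) Poisson regression on synthetic monthly data, with wave indicators capturing level changes and log patient-days as an offset so exponentiated coefficients are rate ratios on the DOT per patient-day scale. The data, wave boundaries, and use of statsmodels are illustrative assumptions, not details taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# 36 synthetic months: 14 baseline, then three pandemic waves (assumed boundaries).
wave = np.repeat(["baseline", "wave1", "wave2", "wave3"], [14, 6, 6, 10])
patient_days = rng.integers(2500, 3500, size=36)

# Simulate days of therapy (DOT) with assumed rate ratios per wave.
true_rr = {"baseline": 1.0, "wave1": 1.75, "wave2": 1.2, "wave3": 1.0}
rate = 0.09 * np.array([true_rr[w] for w in wave])   # DOT per patient-day
dot = rng.poisson(rate * patient_days)

df = pd.DataFrame({"dot": dot, "patient_days": patient_days,
                   "wave": wave, "time": np.arange(36)})

# Segmented Poisson regression: linear time trend plus wave-level indicators,
# with log(patient-days) as an offset; exponentiated wave coefficients are
# rate ratios relative to the pre-pandemic baseline.
fit = smf.poisson("dot ~ time + C(wave, Treatment('baseline'))",
                  data=df, offset=np.log(df["patient_days"])).fit(disp=False)
print(np.exp(fit.params.filter(like="wave")))
```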
Results:
Within the medical units, use of respiratory antibiotics increased during the first wave of the pandemic (rate ratio [RR], 1.76; 95% CI, 1.38–2.25) but returned to the baseline in waves 2 and 3 despite more COVID-19 admissions. In the ICUs, the use of respiratory antibiotics increased in wave 1 (RR, 1.30; 95% CI, 1.16–1.46) and wave 2 of the pandemic (RR, 1.21; 95% CI, 1.11–1.33) and returned to the baseline in the third wave, which had the most COVID-19 admissions.
Conclusions:
After an initial surge in respiratory antibiotic prescribing, we observed the normalization of prescribing trends at 3 large hospitals throughout the COVID-19 pandemic. This trend may have been due to the timely generation of new research and guidelines developed with frontline clinicians, allowing for the active application of new research to clinical practice.
Patients with bipolar disorder (BPD) are prone to engage in risk-taking behaviours and self-harm, contributing to a higher risk of traumatic injuries requiring medical attention at the emergency room (ER). We hypothesize that pharmacological treatment of BPD could reduce the risk of traumatic injuries by alleviating symptoms, but the evidence remains unclear. This study aimed to examine the association between pharmacological treatment and the risk of ER admissions due to traumatic injuries.
Methods
Individuals with BPD who received mood stabilizers and/or antipsychotics were identified using a population-based electronic healthcare records database in Hong Kong (2001–2019). A self-controlled case series design was applied to control for time-invariant confounders.
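The self-controlled case series compares each individual's event rate across his or her own exposure windows, which is what removes time-invariant confounding. Below is a minimal sketch, on synthetic data, of how incidence rate ratios of this kind can be estimated with a fixed-effects Poisson model (equivalent to the conditional Poisson likelihood); the window definitions, rates, and use of statsmodels are illustrative assumptions rather than details of the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
windows = ["baseline", "pre_treatment", "on_treatment", "post_cessation"]
length = {"baseline": 365.0, "pre_treatment": 30.0,
          "on_treatment": 180.0, "post_cessation": 60.0}      # days (assumed)
true_irr = {"baseline": 1.0, "pre_treatment": 4.0,
            "on_treatment": 1.0, "post_cessation": 1.3}       # simulated truth

rows = []
for person in range(300):
    base_rate = rng.gamma(2.0, 0.002)   # events per day, varies across people
    for w in windows:
        rows.append({"person": person, "window": w, "interval": length[w],
                     "events": rng.poisson(base_rate * true_irr[w] * length[w])})
df = pd.DataFrame(rows)

# SCCS uses cases only: keep individuals with at least one event overall.
df = df[df.groupby("person")["events"].transform("sum") > 0]

# Poisson regression with a fixed effect per person and an offset for the log
# time spent in each window; exponentiated window coefficients are the
# incidence rate ratios (IRRs) relative to the baseline window.
fit = smf.poisson("events ~ C(window, Treatment('baseline')) + C(person)",
                  data=df, offset=np.log(df["interval"])).fit(disp=False)
print(np.exp(fit.params.filter(like="window")))
```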
Results
A total of 5040 out of 14 021 adults with BPD who received pharmacological treatment and had incident ER admissions due to traumatic injuries from 2001 to 2019 were included. An increased risk of traumatic injuries was found 30 days before treatment [incidence rate ratio (IRR) 4.44 (3.71–5.31), p < 0.0001]. After treatment initiation, the risk remained increased with a smaller magnitude, before returning to baseline [IRR 0.97 (0.88–1.06), p = 0.50] during maintenance treatment. The direct comparison of the risk during treatment to that before and after treatment showed a significant decrease. After treatment cessation, the risk was increased [IRR 1.34 (1.09–1.66), p = 0.006].
Conclusions
This study supports the hypothesis that pharmacological treatment of BPD is associated with a lower risk of ER admissions due to traumatic injuries, and with an increased risk after treatment cessation. Close monitoring for symptom relapse is recommended to clinicians and patients if treatment cessation is warranted.
Objective:
We evaluated the impact of introducing a mandatory indication field into electronic order entry for targeted antibiotics in adult inpatients.
Design:
Retrospective, before-and-after trial.
Setting:
A 400-bed community hospital.
Interventions:
All adult electronic intravenous (IV) and enteral orders for targeted antibiotics (moxifloxacin, ciprofloxacin, clindamycin, vancomycin, and metronidazole) had a mandatory indication field added. Control antibiotics (amoxicillin-clavulanate, ceftriaxone, and piperacillin-tazobactam) were chosen to track shifts in antibiotic prescribing due to the introduction of the mandatory indication field.
Methods:
Descriptive statistics were used to summarize the primary outcome, measured in Defined Daily Doses (DDD) per 1000 patient days (PD). Interrupted time-series (ITS) analysis was performed to compare levels and trends in antibiotic usage of targeted and control antibiotics during 24 months before and after the intervention. Additionally, a descriptive analysis of mandatory indication fields for targeted antibiotics in the postintervention period was conducted.
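For reference, the usage metric named in the Methods can be computed as in the short sketch below; the WHO DDD values in the dictionary are illustrative placeholders rather than authoritative figures, and the example quantities are invented.

```python
# Illustrative WHO Defined Daily Dose values in grams (placeholders, not
# authoritative figures).
WHO_DDD_GRAMS = {"ciprofloxacin_iv": 0.8, "metronidazole_iv": 1.5}

def ddd_per_1000_pd(grams_dispensed: float, who_ddd_grams: float,
                    patient_days: float) -> float:
    """Convert total grams dispensed into DDD per 1,000 patient-days."""
    return (grams_dispensed / who_ddd_grams) / patient_days * 1000

# Example: 240 g of IV ciprofloxacin dispensed over 3,100 patient-days.
print(ddd_per_1000_pd(240, WHO_DDD_GRAMS["ciprofloxacin_iv"], 3100))
```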
Results:
In total, 4,572 study antibiotic orders were evaluated after the intervention. Preset mandatory indications were selected for 30%–55% of orders. There was decreased usage of targeted antibiotics (mean, 92.02 vs 72.07 DDD/1000-PD) with increased usage of control antibiotics (mean, 102.73 vs 119.91 DDD/1000-PD). ITS analysis showed no statistically significant difference in overall antibiotic usage before and after the intervention for all targeted antibiotics.
Conclusion:
This study showed moderate use of preset mandatory indications, suggesting that the preset list of indications can be optimized. There was no impact on overall antibiotic usage with the use of mandatory indications. More prospective research is needed to study the utility of this intervention in different contexts.
Flocked and foam swabs were used to sample five healthcare pathogens from three sizes of steel and plastic coupons: 26 cm², 323 cm², and 645 cm². As surface area increased, a 1–2 log10 decrease in recovered organisms (P < .05) was observed. Sampling 26-cm² coupons yielded the optimal median percentage of pathogens recovered.
This chapter reviews the contribution of the learning sciences to teacher learning research, with particular consideration of how cognitive, sociocognitive, sociocultural, and improvement-focused perspectives extend teacher learning research. Learning scientists study all phases of teacher learning, including preservice education (before becoming a teacher), the first few years of teaching, and ongoing mastery throughout the career. Research often focuses on teaching practices – what teachers do in the classroom and how they move from novice to expert performance. Teacher learning can be supported through collaboration with other teachers in a professional learning community (PLC); through coaching and mentoring; through videos of expert teaching practice; and through educative curriculum materials.
Objective:
To investigate a cluster of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections in employees working on 1 floor of a hospital administration building.
Methods:
Contact tracing was performed to identify potential exposures and all employees were tested for SARS-CoV-2. Whole-genome sequencing was performed to determine the relatedness of SARS-CoV-2 samples from infected personnel and from control cases in the healthcare system with coronavirus disease 2019 (COVID-19) during the same period. Carbon dioxide levels were measured during a workday to assess adequacy of ventilation; readings >800 parts per million (ppm) were considered an indication of suboptimal ventilation. To assess the potential for airborne transmission, DNA-barcoded aerosols were released, and real-time polymerase chain reaction was used to quantify particles recovered from air samples in multiple locations.
Results:
Between December 22, 2020, and January 8, 2021, 17 coworkers tested positive for SARS-CoV-2, including 13 symptomatic and 4 asymptomatic individuals. Of the 5 cluster SARS-CoV-2 samples sequenced, 3 were genetically related, but these employees denied higher-risk contacts with one another. None of the sequences from the cluster were genetically related to the 17 control sequences of SARS-CoV-2. Carbon dioxide levels increased during a workday but never exceeded 800 ppm. DNA-barcoded aerosol particles were dispersed from the sites of release to locations throughout the floor; 20% of air samples had >1 log10 particles.
Conclusions:
In a hospital administration building outbreak, sequencing of SARS-CoV-2 confirmed transmission among coworkers. Transmission occurred despite the absence of higher-risk exposures and in a setting with adequate ventilation based on monitoring of carbon dioxide levels.
Background: A non-operative approach has been favoured for elderly patients with lumbar spondylolisthesis due to a perceived higher risk with surgery. However, most studies have used an arbitrary age cut-off to define “elderly.” We hypothesized that frailty is an independent predictor of morbidity after surgery for lumbar spondylolisthesis. Methods: The American College of Surgeons National Surgical Quality Improvement Program (NSQIP) database for the years 2010 to 2018 was used. Patients who received posterior lumbar spine decompression with or without posterior instrumented fusion for degenerative lumbar spondylolisthesis were included. The primary outcome was major complication. Secondary outcomes were readmission, reoperation, and discharge to a location other than home. Logistic regression analysis was done to investigate the association between outcomes and frailty. Results: There were 15 658 patients in this study. The mean age was 62.5 years (SD 12.2). Frailty, as measured by the Modified Frailty Index-5, was significantly associated with increased risk of major complication, unplanned readmission, reoperation, and non-home discharge. Increasing frailty was associated with increasing risk of morbidity. Conclusions: Frailty is independently associated with a higher risk of morbidity after posterior surgery in patients with lumbar spondylolisthesis. These data are of significance to clinicians in planning treatment for these patients.
Background: Susac Syndrome (SuS) is a rare autoimmune disorder of the cerebral, retinal, and inner ear microvasculature. One of the cardinal manifestations of central nervous system (CNS) involvement is encephalopathy; however, the cognitive profile in SuS is poorly characterized in the literature. Methods: In this cross-sectional case series of seven participants diagnosed with Susac Syndrome in remission in British Columbia, we use a battery of neuropsychological testing, subjective disease scores, and objective markers of disease severity to characterize the affected cognitive domains and determine whether any disease characteristics predict neuropsychological performance. We also compare this battery of tests to neuroimaging markers to determine whether a correlation exists between radiographic markers of CNS disease and clinical evaluation of disease severity. Results: There was a variety of cognitive deficits, with memory and language dysfunction being the most common. Despite the variability, performance on some neuropsychological tests (MoCA) correlated with markers of functional disability (EDSS). Additionally, MoCA and EDSS scores correlated with neuroimaging findings of both corpus callosum and white matter changes. Finally, psychiatric scores correlated with participant-reported scores of disease severity. Conclusions: There is a relationship between cognitive deficits, subjective and objective disease disability, and neuroimaging findings in Susac Syndrome.
COVID-19 vaccines are likely to be scarce for years to come. Many countries, from India to the U.K., have demonstrated vaccine nationalism. What are the ethical limits to this vaccine nationalism? Neither extreme nationalism nor extreme cosmopolitanism is ethically justifiable. Instead, we propose the fair priority for residents (FPR) framework, in which governments can retain COVID-19 vaccine doses for their residents only to the extent that they are needed to maintain a noncrisis level of mortality while they are implementing reasonable public health interventions. Practically, a noncrisis level of mortality is that experienced during a bad influenza season, which society considers an acceptable background risk. Governments take action to limit mortality from influenza, but there is no emergency that includes severe lockdowns. This “flu-risk standard” is a nonarbitrary and generally accepted heuristic. Mortality above the flu-risk standard justifies greater governmental interventions, including retaining vaccines for a country's own citizens over global need. The precise level of vaccination needed to meet the flu-risk standard will depend upon empirical factors related to the pandemic. This links the ethical principles to the scientific data emerging from the emergency. Thus, the FPR framework recognizes that governments should prioritize procuring vaccines for their country when doing so is necessary to reduce mortality to noncrisis flu-like levels. But after that, a government is obligated to do its part to share vaccines to reduce risks of mortality for people in other countries. We consider and reject objections to the FPR framework based on a country: (1) having developed a vaccine, (2) raising taxes to pay for vaccine research and purchase, (3) wanting to eliminate economic and social burdens, and (4) being ineffective in combating COVID-19 through public health interventions.
The mental health of slum residents is under-researched globally, and depression is a significant source of worldwide morbidity. Brazil's large slum-dwelling population is often considered part of a general urban-poor demographic. This study aims to identify the prevalence and distribution of depression in Brazil and compare mental health inequalities between slum and non-slum populations.
Methods
Data were obtained from Brazil's 2019 National Health Survey. Slum residence was defined based on the UN-Habitat definition for slums and estimated from survey responses. Doctor-diagnosed depression, Patient Health Questionnaire (PHQ-9)-screened depression and presence of undiagnosed depression (PHQ-9-screened depression in the absence of a doctor's diagnosis) were analysed as primary outcomes, alongside depressive symptom severity as a secondary outcome. Prevalence estimates for all outcomes were calculated. Multivariable logistic regression models were used to investigate the association of socioeconomic characteristics, including slum residence, with primary outcomes. Depressive symptom severity was analysed using generalised ordinal logistic regression.
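As a sketch of the regression modelling described above, the snippet below fits a multivariable logistic regression on synthetic respondent-level data and exponentiates the coefficients to obtain adjusted odds ratios. It deliberately ignores the survey's complex sampling design and weights, and all variable names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 5000

# Synthetic respondents: slum residence, age, and chronic disease status.
slum = rng.integers(0, 2, n)
age = rng.integers(18, 80, n)
chronic = rng.integers(0, 2, n)

# Simulate a binary depression outcome from an assumed logistic model.
linpred = -2.2 + 0.6 * chronic - 0.15 * slum + 0.01 * (age - 45)
depressed = rng.binomial(1, 1 / (1 + np.exp(-linpred)))
df = pd.DataFrame({"depressed": depressed, "slum": slum,
                   "age": age, "chronic": chronic})

# Multivariable logistic regression; exponentiated coefficients are the
# adjusted odds ratios, with 95% CIs from the coefficient confidence limits.
fit = smf.logit("depressed ~ slum + chronic + age", data=df).fit(disp=False)
print(np.exp(fit.params))        # adjusted ORs
print(np.exp(fit.conf_int()))    # 95% CIs on the OR scale
```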
Results
Nationally, the prevalence of doctor-diagnosed, PHQ-9-screened and undiagnosed depression was 9.9% (95% confidence interval (CI): 9.5–10.3), 10.8% (95% CI: 10.4–11.2) and 6.9% (95% CI: 6.6–7.2), respectively. Slum residents exhibited lower levels of doctor-diagnosed depression than non-slum urban residents (8.6%; 95% CI: 7.9–9.3 v. 10.7%; 95% CI: 10.2–11.2), while reporting similar levels of PHQ-9-screened depression (11.3%; 95% CI: 10.4–12.1 v. 11.3%; 95% CI: 10.8–11.8). In adjusted regression models, slum residence was associated with a lower likelihood of doctor-diagnosed (adjusted odds ratio (adjusted OR): 0.87; 95% CI: 0.77–0.97) and PHQ-9-screened depression (adjusted OR: 0.87; 95% CI: 0.78–0.97). Slum residents showed a greater likelihood of reporting less severe depressive symptoms. There were significant ethnic/racial disparities in the likelihood of reporting doctor-diagnosed depression. Black individuals were less likely to report doctor-diagnosed depression (adjusted OR: 0.66; 95% CI: 0.57–0.75) than white individuals. A similar pattern was observed in Mixed Black (adjusted OR: 0.72; 95% CI: 0.66–0.79) and other (adjusted OR: 0.63; 95% CI: 0.45–0.88) ethnic/racial groups. Slum residents self-reporting a diagnosis of one or more chronic non-communicable diseases had greater odds of exhibiting all three primary depression outcomes.
Conclusions
Substantial inequalities characterise the distribution of depression in Brazil, including in slum settings. People living in slums may have lower rates of diagnosed depression than non-slum urban residents. Understanding the mechanisms behind the discrepancy in depression diagnosis between slum and non-slum populations is important to inform health policy in Brazil, including in addressing potential gaps in access to mental healthcare.
South-east Asia's diverse coastal wetlands, which span natural mudflats and mangroves to man-made salt pans, offer critical habitat for many migratory waterbird species in the East Asian–Australasian Flyway. Species dependent on these wetlands include nearly the entire population of the Critically Endangered spoon-billed sandpiper Calidris pygmaea and the Endangered spotted greenshank Tringa guttifer, and significant populations of several other globally threatened and declining species. Presently, more than 50 coastal Important Bird and Biodiversity Areas (IBAs) in the region (7.4% of all South-east Asian IBAs) support at least one threatened migratory species. However, recent studies continue to reveal major knowledge gaps on the distribution of migratory waterbirds and important wetland sites along South-east Asia's vast coastline, including undiscovered and potential IBAs. Alongside this, there are critical gaps in the representation of coastal wetlands across the protected area networks of many countries in this region (e.g. Viet Nam, Indonesia, Malaysia), hindering effective conservation. Although a better understanding of the value of coastal wetlands to people and their importance to migratory species is necessary, governments and other stakeholders need to do more to strengthen the conservation of these ecosystems by improving protected area coverage, habitat restoration, and coastal governance and management. This must be underpinned by the judicious use of evidence-based approaches, including satellite-tracking of migratory birds, ecological research and ground surveys.
Current first-line treatments for paediatric depression demonstrate mild-to-moderate effectiveness. This has spurred a growing body of literature on lifestyle recommendations pertaining to nutrition, sleep and exercise for treating paediatric depression.
Aims
Paediatric depression clinical practice guidelines (CPGs) were reviewed to assess their quality and to catalogue recommendations on nutrition, sleep and exercise made by higher-quality CPGs.
Method
Searches were conducted in Medline, EMBASE, PsycINFO, Web of Science and CINAHL, and in grey literature CPG databases, for relevant CPGs. Eligible CPGs with a minimum or high-quality level, as determined by the Appraisal of Guidelines for Research and Evaluation, Second Edition instrument, were included if they were (a) paediatric; (b) CPGs, practice parameters or consensus or expert committee recommendations; (c) for depression; (d) the latest version; and (e) contained lifestyle recommendations for nutrition, sleep or exercise. Key information extracted included author(s), language, year of publication, country, the institutional body issuing the CPG, target disorder, age group, lifestyle recommendation and the methods used to determine CPG lifestyle recommendations.
Results
Ten paediatric CPGs for depression with a minimum or high-quality level contained recommendations on nutrition, sleep or exercise. Lifestyle recommendations were predominantly qualitative, with quantitative details outlined only in two CPGs for exercise. Most recommendations were brief general statements, with 50% lacking supporting evidence from the literature.
Conclusions
Interest in lifestyle interventions for the treatment of child and youth depression is growing. However, current CPG lifestyle recommendations for nutrition, sleep or exercise are based on expert opinion rather than clinical trials.