Among 353 healthcare personnel in a longitudinal cohort in four hospitals in Atlanta, GA (May–June 2020), 23 (6.5%) had SARS-CoV-2 antibodies. Spending >50% of a typical shift at bedside (OR, 3.4; 95% CI, 1.2–10.5) and Black race (OR, 8.4; 95% CI, 2.7–27.4) were associated with SARS-CoV-2 seropositivity.
The concept of autonomy is essential in the practice and study of gerontology and in long-term care policies. For older adults with expanding care needs, scores from tightly specified assessment instruments, which aim to measure the autonomy of service users, usually determine access to social services. These instruments emphasise functional independence in the performance of activities of daily living. In an effort to broaden the understanding of autonomy in needs assessment practice, the province of Québec (Canada) added social and relational elements to the assessment tool. In the wake of these changes, this article studies the interaction between the use of assessment instruments and the extent to which they alter how older adults define their autonomy as service users. This matters since the conceptualisation of autonomy shapes the formulation of long-term care policy problems, influencing both the demand and supply of services and the types of services that ought to be prioritised by governments. Relying on focus groups, this study shows that the functional autonomy frame dominates problem definitions, while social/relational framings are marginal. This reflects the more authoritative weight of functional autonomy within the assessment tool and contributes to the biomedicalisation of ageing.
In the recent past, the #MeToo movement has shaken India. A docket of high-flying names, from politicians to celebrities and journalists, has come under scrutiny for alleged sexual abuse of women. Flagged by a Bollywood actress, the #MeToo campaign in India ignited feminists, academicians, and policymakers to re-examine women’s continued abuse in all sections of society. Despite a stringent legal regime enforced after the Nirbhaya tragedy, the abuse of women continues unabated. Feminists opine that violence against women remains an ongoing concern that is heightened in the face of a waning criminal justice system that fails to address their plight. Lack of confidence in the system discourages women from approaching the authorities, something palpable in #MeToo allegations, where women preferred to remain silent in the face of inevitable backlash from society; lack of support and cooperation from police and prosecution; and, finally, courts, where the victim is positioned as the accused and made to answer questions of how and why. This article examines the #MeToo movement against the backdrop of the rising crime graph and the criminal justice system’s consequent failure to respond to it.
The movement of healthcare professionals (HCPs) induces an indirect contact network: touching a patient or the environment in one area, then again elsewhere, can spread healthcare-associated pathogens from 1 patient to another. Thus, understanding HCP movement is vital to calibrating mathematical models of healthcare-associated infections. Because long-term care facilities (LTCFs) are an important locus of transmission and have been understudied relative to hospitals, we developed a system for measuring contact patterns specifically within an LTCF. Methods: To measure HCP movement patterns, we used badges (credit-card–sized, programmable, battery-powered devices with wireless proximity sensors) worn by HCPs and placed in 30 locations for 3 days. Each badge broadcast a brief message every 8 seconds; when received by other badges within range, the recipients recorded the time, source badge identifier, and signal strength. By fusing the data collected by all badges with a facility map, we estimated when and for how long each HCP was in any of the locations where devices had been installed. Results: Combining the messages captured by all of our devices, we calculated the dwell time for each job type (eg, nurses, nursing assistants, physical therapists) in different locations (eg, resident rooms, dining areas, nurses’ stations, hallways). Although dwell times over all job and area types averaged ∼100 seconds, the standard deviation was large (115 seconds), with a mean of job-type maximums of ∼450 seconds. For example, nursing assistants spent substantially more time in resident rooms and transitioned across rooms at a much higher rate. Overall, each distribution exhibits a power-law–like characteristic.
By aggregating the data from devices with location data extracted from the floor plan, we were able to produce an explicit trace for each individual (identified only by job type) for each day and to compute cross-table transition probabilities by area for each job type. Conclusions: We developed a portable system for measuring contact patterns in long-term care settings. Our results confirm that frequent interactions between HCPs and LTCF residents occur, but they are not uniform across job types or resident locations. The data produced by our system can be used to better calibrate mathematical models of pathogen spread in LTCFs. Moreover, our system can be easily and quickly deployed to any healthcare setting to similarly inform outbreak investigations.
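The ping-to-visit aggregation described above can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the record layout (timestamp, HCP badge, location badge) and the missed-ping tolerance are assumptions.

```python
# Sketch: collapse raw proximity pings into dwell-time "visits".
# Records are assumed to be (timestamp_s, hcp_badge_id, location_id) tuples,
# one per broadcast heard by a fixed location badge.
from collections import defaultdict

PING_INTERVAL = 8            # seconds between badge broadcasts (per the abstract)
MAX_GAP = 3 * PING_INTERVAL  # missed pings tolerated before closing a visit (assumed)

def dwell_times(records):
    """Return (hcp, location, visit_start, dwell_seconds) for each visit."""
    visits = []
    by_pair = defaultdict(list)
    for ts, hcp, loc in records:
        by_pair[(hcp, loc)].append(ts)
    for (hcp, loc), stamps in by_pair.items():
        stamps.sort()
        start = prev = stamps[0]
        for ts in stamps[1:]:
            if ts - prev > MAX_GAP:  # gap too long: close the current visit
                visits.append((hcp, loc, start, prev - start + PING_INTERVAL))
                start = ts
            prev = ts
        visits.append((hcp, loc, start, prev - start + PING_INTERVAL))
    return visits
```

Per-job-type dwell-time distributions then follow by grouping visits on the HCP's role.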
Disclosures: Scott Fridkin reports that his spouse receives a consulting fee from the vaccine industry.
Background: Certain nursing home (NH) resident care tasks have a higher risk for multidrug-resistant organism (MDRO) transfer to healthcare personnel (HCP), which can result in transmission to residents if HCP fail to perform recommended infection prevention practices. However, data on HCP–resident interactions are limited and do not account for intrafacility practice variation. Understanding differences in interactions, by HCP role and unit, is important for informing MDRO prevention strategies in NHs. Methods: In 2019, we conducted serial intercept interviews; each HCP was interviewed 6–7 times for the duration of a unit’s dayshift at 20 NHs in 7 states. The next day, staff on a second unit within the facility were interviewed during the dayshift. HCP on 38 units were interviewed to identify HCP–resident care patterns. All unit staff were eligible for interviews, including certified nursing assistants (CNAs), nurses, physical or occupational therapists, physicians, midlevel practitioners, and respiratory therapists. HCP were asked to list which residents they had cared for (within resident rooms or common areas) since the prior interview. Respondents selected from 14 care tasks. We classified units into 1 of 4 types: long-term, mixed, short stay or rehabilitation, or ventilator or skilled nursing. Interactions were classified based on the risk of HCP contamination after task performance. We compared proportions of interactions associated with each HCP role and performed clustered linear regression to determine the effect of unit type and HCP role on the number of unique task types performed per interaction. Results: Intercept interviews described 7,050 interactions and 13,843 care tasks. Except in ventilator or skilled nursing units, CNAs had the greatest proportion of care interactions (interfacility range, 50%–60%) (Fig. 1).
In ventilator and skilled nursing units, interactions were evenly shared between CNAs and nurses (43% and 47%, respectively). On average, CNAs in ventilator and skilled nursing units performed the most unique task types (2.5 task types per interaction, Fig. 2) compared to other unit types (P < .05). Compared to CNAs, most other HCP types had significantly fewer task types (0.6–1.4 task types per interaction, P < .001). Across all facilities, 45.6% of interactions included tasks that were higher risk for HCP contamination (eg, transferring, wound and device care, Fig. 3). Conclusions: Focusing infection prevention education efforts on CNAs may be most efficient for preventing MDRO transmission within NHs because CNAs have the most HCP–resident interactions and complete more tasks per visit. Studies of HCP–resident interactions are critical to improving understanding of transmission mechanisms as well as to targeting MDRO prevention interventions.
Funding: Centers for Disease Control and Prevention (grant no. U01CK000555-01-00)
Disclosures: Scott Fridkin, consulting fee, vaccine industry (spouse)
Background: The NHSN methods for central-line–associated bloodstream infection (CLABSI) surveillance do not account for additive CLABSI risk of concurrent central lines. Past studies were small and modestly risk adjusted but quantified the risk to be ~2-fold. If the attributable risk is this high, facilities that serve high-acuity patients with medically indicated concurrent central-line use may disproportionally incur CMS payment penalties for having high CLABSI rates. We aimed to build evidence through analysis using improved risk adjustment of a multihospital CLABSI experience to influence NHSN CLABSI protocols to account for risks attributed to concurrent central lines. Methods: In a retrospective cohort of adult patients at 4 hospitals (range, 110–733 beds) from 2012 to 2017, we linked central-line data to patient encounter data (age, comorbidities, total parenteral nutrition, chemotherapy, CLABSI). Analysis was limited to patients with >2 central-line days, with either a single central line or concurrence of no more than 2 central lines where insertion and removal dates overlapped by >1 day. Propensity-score matching for likelihood of concurrence and conditional logistic regression modeling estimated the risk of CLABSI attributed to concurrence of >1 day. To evaluate in Cox proportional hazards regression of time to CLABSIs, we also analyzed patients as unique central-line episodes: low risk (ie, ports, dialysis central lines, or PICC) or high risk (ie, temporary or nontunneled) and single versus concurrent. Results: In total, 64,575 central lines were used in 50,254 encounters. Among these patients, 517 developed a CLABSI; 438 (85%) with a single central line and 74 (15%) with concurrence. Moreover, 4,657 (9%) patients had concurrence (range, 6%–14% by hospital); of these, 74 (2%) had CLABSI, compared to 71 of 7,864 propensity-matched controls (1%). Concurrence patients had a median of 17 NHSN central-line days and 21 total central-line days. 
In multivariate modeling, patients with more concurrence (>2 of 3 concurrent central-line days) had a higher risk for CLABSI (adjusted risk ratio, 1.62; 95% CI, 1.1–2.3) compared to controls. In survival analysis, 14,610 concurrent central-line episodes were compared to 31,126 single low-risk central-line episodes; adjusting for comorbidity, total parenteral nutrition, and chemotherapy, the daily excess risk of CLABSI attributable to the concurrent central line was ~80% (hazard ratio, 1.78 for 2 high-risk or 2 low-risk central lines; hazard ratio, 1.80 for a mix of high- and low-risk central lines) (Fig. 1). Notably, the hazard ratio attributed to a single high-risk line compared to a low-risk line was 1.44 (95% CI, 1.13–1.84). Conclusions: Because a concurrent central line nearly doubles the risk for CLABSI compared to a single low-risk line, the CDC should modify NHSN methodology to better account for this risk.
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Although antibiotic stewardship programs (ASP) are now required in nursing homes, assimilating and responding to data to improve prescribing in nursing homes is novel. Four Atlanta-based skilled nursing facilities (SNFs) began collaborating (EASIL: Emory Antibiotic Stewardship in Long-Term Care) to share standardized prescribing data to allow interfacility comparisons and action. Methods: After SNF ASPs were evaluated and trained, standardized prescribing logs were submitted (January 2019 to June 2019) including the following data: start date, treatment site, prescriber attribution of order (ie, SNF order vs hospital order), and monthly resident days. SNF-specific point estimates of usage rates were calculated as pooled means for all antibiotic starts, SNF-order starts, and days of therapy (DOT), by treatment site, per 1,000 resident days. Duration of urinary tract infection (UTI) therapy was assessed by calculating the percentage of SNF-UTI starts over the recommended duration defined by the local treatment guideline. Rate ratios (RRs) of use were calculated to compare SNF-specific rates to the largest SNF. The 95% CIs were calculated using normal approximation. Results: Monthly starts ranged from 124 to 177, with a pooled mean of 7.8 antibiotic starts (any type), 4.5 SNF-order starts, and 1.2 SNF-UTI starts per 1,000 resident days. Approximately half of all starts were SNF starts (range, 43%–53%), and less than half of DOT were attributed to SNF starts (range, 35%–45%). Overall, SNF-order treatment sites were most often UTIs (29%), lower respiratory infections (17%), and skin and soft-tissue infections (17%). SNF-order UTI starts per 1,000 patient days varied at 1 SNF (SNF B RR, 1.57; 95% CI, 1.04–2.36). SNF-order UTI DOT per 1,000 patient days was more variable, with SNFs B and C having significantly higher rates (SNF B RR, 1.49; 95% CI, 1.24–1.82; SNF C RR, 5.42; 95% CI, 4.65–6.34) than SNF A (Fig. 1).
The percentage of SNF-order UTI starts that were over recommended duration ranged from 8% (nitrofurantoin, SNF A) to 100% (fluoroquinolones, SNF C) (Fig. 1). Conclusions: Although UTIs are the single most common reason to prescribe antibiotics after arriving in a SNF, they account for a small fraction of overall starts and an even smaller fraction of DOT. We identified outlier prescribing by different SNFs using 3 metrics, suggesting that distinct corrective actions are necessary to target distinct prescribing challenges (starts, duration, and transitions of care).
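The usage metrics above (starts per 1,000 resident days; rate ratios with normal-approximation CIs computed on the log scale) reduce to simple arithmetic. A minimal sketch, with invented counts for illustration only:

```python
# Sketch of rate and rate-ratio arithmetic, assuming event counts (antibiotic
# starts) and denominators (resident days) per facility. The normal
# approximation on the log rate ratio is a standard choice; whether it matches
# the authors' exact method is an assumption.
import math

def rate_per_1000(starts, resident_days):
    """Antibiotic starts per 1,000 resident days."""
    return 1000 * starts / resident_days

def rate_ratio_ci(starts_a, days_a, starts_ref, days_ref, z=1.96):
    """Rate ratio of facility A vs a reference facility, with a 95% CI
    from the normal approximation on log(RR)."""
    rr = (starts_a / days_a) / (starts_ref / days_ref)
    se_log = math.sqrt(1 / starts_a + 1 / starts_ref)  # Poisson counts assumed
    return rr, rr * math.exp(-z * se_log), rr * math.exp(z * se_log)
```

A CI excluding 1.0 would flag a facility as a significant outlier relative to the reference SNF.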
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
Background: Current NHSN denominator reporting for central-line–associated bloodstream infection (CLABSI) counts each patient day with n central lines as 1 central-line day. The NHSN does not directly adjust for potential increased risk of CLABSI from concurrent central lines, but the current NHSN standardized infection ratio (SIR) methods may account for differences in concurrence by adjusting for location type. Objective: We examined differences in central-line concurrence by NHSN location type among CLABSI patients. Methods: In a retrospective cohort of adults with CLABSI at 4 hospitals from 2012 to 2017, we linked central-line data to encounter and CLABSI data. Central lines were considered concurrent if they overlapped for >1 day. We calculated the proportion of patients with concurrence at both NHSN location and SIR group levels; risk ratios for concurrence between NHSN location types within each SIR group (ie, locations defined by SIR models as equal “risk”) were determined. Results: In total, 930 CLABSIs were identified from 19 NHSN-defined locations that map to 7 SIR groups. Most CLABSIs occurred in locations mapped to either of 2 SIR groups: wards (227, 16% concurrence) and ICUs (294, 33% concurrence). The ward group had 3 NHSN locations (median, 78 CLABSIs), with concurrence ranging from 8% (medical-surgical ward) to 20% (surgical ward). The ICU group had 6 NHSN locations (median, 47.5 CLABSIs), with concurrence ranging from 20% (neurosurgical ICU) to 39% (medical ICU). Despite the noted variations, no risk ratio was statistically different within each SIR group (Table 1). Conclusions: In patients with CLABSIs, the frequency of concurrence varied up to 2-fold between location types within the current NHSN SIR groups, though not statistically significantly. Assessing whether this difference in magnitude persists in all patients with central lines is an important next step in refining risk adjustment methods to account for concurrent central-line use.
Disclosures: Scott Fridkin reports that his spouse receives consulting fees from the vaccine industry.
The catastrophic declines of three species of ‘Critically Endangered’ Gyps vultures in South Asia were caused by unintentional poisoning by the non-steroidal anti-inflammatory drug (NSAID) diclofenac. Despite a ban on its veterinary use in 2006 (India, Nepal, Pakistan) and 2010 (Bangladesh), residues of diclofenac have continued to be found in cattle carcasses and in dead wild vultures. Another NSAID, meloxicam, has been shown to be safe to vultures. From 2012 to 2018, we undertook covert surveys of pharmacies in India, Nepal and Bangladesh to investigate the availability and prevalence of NSAIDs for the treatment of livestock. The purpose of the study was to establish whether diclofenac continued to be sold for veterinary use, whether the availability of meloxicam had increased and to determine which other veterinary NSAIDs were available. The availability of diclofenac declined in all three countries, virtually disappearing from pharmacies in Nepal and Bangladesh, highlighting the advances made in these two countries to reduce this threat to vultures. In India, diclofenac still accounted for 10–46% of all NSAIDs offered for sale for livestock treatment in 2017, suggesting weak enforcement of existing regulations and a continued high risk to vultures. Availability of meloxicam increased in all countries and was the most common veterinary NSAID in Nepal (89.9% in 2017). Although the most widely available NSAID in India in 2017, meloxicam accounted for only 32% of products offered for sale. In Bangladesh, meloxicam was less commonly available than the vulture-toxic NSAID ketoprofen (28% and 66%, respectively, in 2018), despite the partial government ban on ketoprofen in 2016. Eleven different NSAIDs were recorded, several of which are known or suspected to be toxic to vultures. 
Conservation priorities should include awareness raising, stricter implementation of current bans, bans on other vulture-toxic veterinary NSAIDs, especially aceclofenac and nimesulide, and safety-testing of other NSAIDs on Gyps vultures to identify safe and toxic drugs.
Introduction: Kussmaul's sign, the absence of a drop in jugular venous pressure (JVP) or a paradoxical increase in JVP on inspiration, can be elicited clinically as an indicator of right ventricular myocardial infarction (RVMI). RVMI poses unique diagnostic and management challenges. It complicates 30–50% of inferior MIs and is associated with increased mortality when compared to inferior MI without RV involvement. Early recognition allows maintenance of preload by avoiding nitroglycerin, diuretic, and narcotic medications and by treating with fluids and vasopressors. We reviewed the evidence for Kussmaul's sign in the diagnosis of RVMI. Methods: We conducted a librarian-assisted search of PubMed, Medline, Embase, the Cochrane database, and relevant conference abstracts from 1965 to October 2019. No restrictions on language or study type were imposed. All studies of patients presenting with acute myocardial infarction were reviewed. Two independent reviewers extracted data from relevant studies. Studies were combined when study populations were similar. Study quality was assessed using the QUADAS-2 tool. Random-effects meta-analysis was performed using metaprop in Stata for the 3 reference standards combined. Subset analysis for each of the 3 reference standards was completed. Results: We identified 122 studies; 10 were selected for full-text review. Eight studies had comparable populations, with a total of 469 consecutive patients admitted to the coronary care unit with acute inferior myocardial infarction, and were included in the analysis. Prevalence of RVMI was 36% (95% CI, 31.8–40.5). Reference standards for the diagnosis of RVMI included echocardiography, 16-lead ECG, and haemodynamic studies. A gold standard for diagnosis of RVMI is lacking, and thus the reference standards were combined. Kussmaul's sign had a sensitivity of 69.3% (95% CI, 46.3–85.5; I² = 86.7%), specificity of 95.1% (95% CI, 75.6–99.2; I² = 89.3%), and LR+ of 14.1 (95% CI, 2.6–73.2).
Subset analysis of echocardiography, ECG, and haemodynamic studies revealed sensitivities of 45%, 77%, and 82% (I² = 62%, N/A, 70%), respectively, and specificities of 92%, 84%, and 92% (I² = 86%, N/A, 86%). Conclusion: Kussmaul's sign is specific for acute right ventricular myocardial infarction and may serve as an important clinical sign of right ventricular dysfunction requiring preload-preserving management.
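The diagnostic metrics above are simple functions of the 2×2 test-vs-reference counts; notably, the pooled LR+ of 14.1 follows directly from the pooled sensitivity and specificity. A minimal sketch (the 2×2 counts below are illustrative, not the study data):

```python
# Sketch of standard diagnostic-accuracy arithmetic.
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 counts
    (tp/fn: diseased, test +/-; tn/fp: non-diseased, test -/+)."""
    return tp / (tp + fn), tn / (tn + fp)

def positive_lr(sensitivity, specificity):
    """Positive likelihood ratio: LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)
```

With the pooled estimates from the abstract, positive_lr(0.693, 0.951) gives the reported LR+ of 14.1.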
Past research has demonstrated the high prevalence of depression in older adults. However, most studies have followed the symptom trajectory of individuals diagnosed with depression in clinical settings, and few longitudinal studies have characterized the patterns of depression in population-based samples of older adults.
To describe the course of depressive disorder in a population-based sample of older adults over a 12-month period and to examine the influence of medical and psychosocial factors on the outcome.
Data come from the longitudinal ESA Study (Enquête sur la Santé des Aînés) of community-dwelling older adults (n = 2752). Depression, including major and minor depression, was measured using DSM-IV criteria. Generalized estimating equations (GEE) were used to assess relations between participant characteristics at baseline and depression 12 months later.
Among the 164 participants (5.9%) who were depressed at baseline, 19.5% were continuously ill and 80.4% had recovered 12 months later. Multivariate analyses showed that the risk of depression over the 12-month period was higher among participants who were separated; lived in a rural region; reported a greater number of daily hassles; reported high stress intensity; had a greater number of chronic diseases; or had fair/poor perceived mental health.
These results support the hypothesis that medical and psychosocial factors predict depression over time in older adults. Using readily available prognostic factors (for example, high stress intensity, rural residence, or a greater number of chronic diseases) could help direct treatment to the older adults at highest risk of a poor prognosis.
To describe barriers and facilitators to the adoption of recommended infection prevention and control (IPC) practices among healthcare workers (HCWs).
A qualitative research design was used. Individual semistructured interviews with HCWs and observations of clinical practices were conducted from February to May 2018 in 8 care units of 2 large tertiary-care hospitals in Montreal (Québec, Canada).
We interviewed 13 managers, 4 nurses, 2 physicians, 3 housekeepers, and 2 medical laboratory technologists. We conducted 7 observations by following IPC nurses (n = 3), nurses (n = 2), or patient attendants (n = 2) in their work routines. Barriers to IPC adoption were related to the context of care, workplace environment issues, and communication issues. The main facilitator of the IPC adoption by HCWs was the “development of an IPC culture or safety culture.” The “IPC culture” relied upon leadership support by managers committed to IPC, shared belief in the importance of IPC measures to limit healthcare-associated infections (HAIs), collaboration and good communication among staff, as well as proactivity and ownership of IPC measures (ie, development of local solutions to reduce HAIs and “working together” toward common goals).
Adoption of recommended IPC measures by HCWs is strongly influenced by the “IPC culture.” The IPC culture was not uniform within hospitals, and differences in IPC culture were identified between care units.
Refractory ventricular fibrillation encountered during cardiac arrest has a mortality rate of 97%.1 As per the advanced cardiac life support (ACLS) guidelines, the management algorithm of ventricular fibrillation consists of chest compressions, epinephrine, defibrillation, and anti-arrhythmics.2 There have been reports describing the use of the fast-acting selective β-blocker, esmolol, and dual-sequential defibrillation in the management of ventricular fibrillation that is refractory to standard ACLS. We present a case of a 24-year-old male who had an out-of-hospital cardiac arrest, with refractory ventricular fibrillation despite high-quality cardiopulmonary resuscitation (CPR) and ACLS management. Along with standard ACLS, triple-sequential defibrillation was used to achieve return of spontaneous circulation (ROSC) after 82 minutes of downtime. An electrocardiogram (ECG) after ROSC showed an ST-elevation myocardial infarction (MI), and the patient underwent angiography showing a 100% occlusion of his left anterior descending artery. Following management of his coronary artery disease, he was discharged from the hospital 16 days later and was neurologically intact.
Bilingual infants vary in when, how, and how often they hear each of their languages. Variables such as the particular languages of exposure, the community context, the onset of exposure, the amount of exposure, and socioeconomic status are crucial for describing any bilingual infant sample. Parent report is an effective approach for gathering data about infants’ language experience. However, its quality is highly dependent on how information is elicited. This paper introduces a Multilingual Approach to Parent Language Estimates (MAPLE). MAPLE promotes best practices for using structured interviews to reliably elicit information from parents on bilingual infants’ language background, with an emphasis on the challenging task of quantifying infants’ relative exposure to each language. We discuss sensitive issues that must be navigated in this process, including diversity in family characteristics and cultural values. Finally, we identify six systematic effects that can impact parent report, and strategies for minimizing their influence.
Medieval Montpellier occupied an aquatic setting, which gave rise to numerous sanitary and environmental problems. Summer storms caused heavy floods; drains became blocked, filling the streets with filth; and the ditches that encircled the city often overran with stagnant water. Magistrates had to ensure that there was an adequate supply of uncontaminated water for domestic and industrial use, while keeping the hydraulic infrastructure in working order. They had also to maintain the river that conveyed merchandise to the town centre, provide for the effective disposal of dirty water, and guard against pollution. Using Montpellier's rich civic archive, this chapter examines the strategies and regulations developed by the authorities in order to minimise the health risks arising from these issues.
Key words: Montpellier; public health; waterworks; sanitation; floods
The study of premodern water management and the development of water supplies is enjoying something of a renaissance, especially in England and Italy. Recent research tends to demonstrate that even small towns were not devoid of rational sanitary provisions, especially when it came to the supply of uncontaminated water, a valuable resource in the Middle Ages. As in most medieval urban communities, water in Montpellier gave rise to multiple sanitary and environmental concerns. The vagaries of the weather frequently caused floods. Intra-muros, sewers were subject to overflows and filled the streets with mud and filth. The city council had to cope with numerous problems related to water, including the provision of a reliable supply of fresh water by means of fountains, and other hydraulic infrastructures, as well as river maintenance, the upkeep of sewers, and the management of wastewater and water pollution. Using Montpellier's rich medieval archives, this chapter will examine the various means by which the council attempted to police the urban environment in order to cope with health risks caused by water-related issues.
Montpellier stands on a group of three sand hills that forms a link between the low ridge of the Cevennes Mountains and a coastline furnished with a number of ponds. These hills provided a suitable site for the establishment of a human settlement.
Age at sexual debut is known to have implications for future sexual behaviours and health outcomes, including HIV infection, early pregnancy and maternal mortality, but may also influence educational outcomes. Longitudinal data on schooling and sexual behaviour from a demographic surveillance site in Karonga district, northern Malawi, were analysed for 3153 respondents between the ages of 12 and 25 years to examine the association between sexual debut and primary school dropout, and the role of prior school performance. Time to dropout was modelled using the Fine and Gray survival model to account for the competing event of primary school completion. To deal with the time-varying nature of age at sexual debut and school performance, models were fitted using landmark analyses. Sexual debut was found to be associated with a five-fold increase in the rate of subsequent dropout for girls and a two-fold increase in the dropout rate for boys (adjusted hazard ratio [aHR] 5.27, 95% CI 4.22–6.57, and 2.19, 95% CI 1.77–2.7, respectively). Of the girls who were sexually active by age 16, only 16% ultimately completed primary schooling, compared with 70% of those aged 18 or older at sexual debut. Prior to sexual debut, girls had primary completion levels similar to those of boys. The association between sexual debut and school dropout could not be explained by prior poor school performance: the effect of sexual debut on dropout was as strong among those who were not behind in school as among those who were overage for their school grade. Girls who were sexually active were more likely to repeat a grade, with no effect seen for boys. Pathways to dropout are complex and may differ for boys and girls. Interventions are needed to improve school progression so that children complete primary school before sexual debut, and to improve sex education and contraception provision.
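Competing-risks analyses such as the Fine and Gray model start from the cumulative incidence function, which, unlike 1 minus the Kaplan-Meier estimate, does not treat the competing event (here, primary school completion) as censoring. A minimal nonparametric sketch of this idea (an Aalen-Johansen-type estimator, not the authors' regression model; the data below are invented):

```python
# Sketch: cumulative incidence of dropout (cause 1) when school completion
# (cause 2) is a competing event and cause 0 means censored.
def cuminc(times, causes):
    """Return [(t, CIF_dropout(t))] at each observed event time."""
    data = sorted(zip(times, causes))
    n = len(data)
    surv = 1.0   # overall event-free survival just before t
    cif = 0.0    # cumulative incidence of cause 1
    out = []
    i = 0
    while i < n:
        t = data[i][0]
        d1 = d2 = 0
        at_risk = n - i
        while i < n and data[i][0] == t:    # tally events tied at time t
            if data[i][1] == 1:
                d1 += 1
            elif data[i][1] == 2:
                d2 += 1
            i += 1
        cif += surv * d1 / at_risk          # dropouts weighted by prior survival
        surv *= 1 - (d1 + d2) / at_risk     # both event types remove from risk set
        out.append((t, cif))
    return out
```

Treating completion as censoring instead would overstate dropout risk, which is why the competing-risks formulation matters for this analysis.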