In response to the COVID-19 pandemic, we rapidly implemented a plasma coordination center within two months to support transfusions for two outpatient randomized controlled trials. The center design was based on an investigational drug services model and used a Food and Drug Administration-compliant database to manage blood product inventory and trial safety.
Methods:
A core investigational team adapted a cloud-based platform to randomize patient assignments and track inventory distribution of control plasma and high-titer COVID-19 convalescent plasma of different blood groups from 29 donor collection centers directly to blood banks serving 26 transfusion sites.
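As an illustration only (not the trial's actual software), the Python sketch below shows the kind of logic such a platform must encode: a 1:1 randomization to convalescent or control plasma and selection of an ABO-compatible unit from a site's digital inventory. The data structures and function names are hypothetical.

```python
import random

# Plasma (donor) groups compatible with a recipient's blood group.
# Plasma compatibility is the reverse of red-cell compatibility: AB plasma is
# the universal donor plasma, and group O recipients can receive any plasma.
PLASMA_COMPATIBILITY = {
    "O":  ["O", "A", "B", "AB"],
    "A":  ["A", "AB"],
    "B":  ["B", "AB"],
    "AB": ["AB"],
}

def randomize_and_allocate(recipient_group, site_inventory, rng=random):
    """Randomize a participant 1:1 to convalescent vs. control plasma and
    return a compatible unit from the site's inventory, or None if the
    current inventory cannot cover the assignment."""
    arm = rng.choice(["convalescent", "control"])
    for donor_group in PLASMA_COMPATIBILITY[recipient_group]:
        units = site_inventory.get((arm, donor_group), 0)
        if units > 0:
            site_inventory[(arm, donor_group)] = units - 1  # decrement inventory
            return {"arm": arm, "unit_group": donor_group}
    return None  # would trigger a resupply request in a real coordination center

# Example: a site holding a few units of each product type.
inventory = {("convalescent", "A"): 2, ("convalescent", "AB"): 1,
             ("control", "A"): 1, ("control", "AB"): 2}
print(randomize_and_allocate("A", inventory))
```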
Results:
We performed 1,351 transfusions in 16 months. The transparency of the digital inventory at each site was critical to facilitate qualification, randomization, and overnight shipments of blood group-compatible plasma for transfusion into trial participants. Although inventory challenges were heightened with COVID-19 convalescent plasma, the cloud-based system and the flexible approach of the plasma coordination center staff across the blood bank network enabled decentralized procurement and distribution of investigational products, maintaining inventory thresholds and overcoming local supply chain constraints at the sites.
Conclusion:
The rapid creation of a plasma coordination center for outpatient transfusions is infrequent in the academic setting. Distributing more than 3,100 plasma units to blood banks charged with managing investigational inventory across the U.S. in a decentralized manner posed operational and regulatory challenges while providing opportunities for the plasma coordination center to contribute to research of global importance. This program can serve as a template in subsequent public health emergencies.
OBJECTIVES/GOALS: Contingency management (CM) procedures yield measurable reductions in cocaine use. This poster describes a trial aimed at using CM as a vehicle to show the biopsychosocial health benefits of reduced use, rather than total abstinence, the currently accepted metric for treatment efficacy. METHODS/STUDY POPULATION: In this 12-week, randomized controlled trial, CM was used to reduce cocaine use and evaluate associated improvements in cardiovascular, immune, and psychosocial well-being. Adults aged 18 and older who sought treatment for cocaine use (N=127) were randomized in a 1:1:1 ratio to three groups: High Value ($55) CM incentives for cocaine-negative urine samples, Low Value ($13) CM incentives, or a non-contingent control group. They completed outpatient sessions three days per week across the 12-week intervention period, totaling 36 clinic visits and four post-treatment follow-up visits. During each visit, participants provided observed urine samples and completed several assays of biopsychosocial health. RESULTS/ANTICIPATED RESULTS: Preliminary findings from generalized linear mixed effect modeling demonstrate the feasibility of the CM platform. Abstinence rates from cocaine use were significantly greater in the High Value group (47% negative; OR = 2.80; p = 0.01) relative to the Low Value (23% negative) and Control (24% negative) groups. In the planned primary analysis, the level of cocaine use reduction based on cocaine-negative urine samples will serve as the primary predictor of cardiovascular (e.g., endothelin-1 levels), immune (e.g., IL-10 levels), and psychosocial (e.g., Addiction Severity Index) outcomes using results from the fitted models. DISCUSSION/SIGNIFICANCE: This research will advance the field by prospectively and comprehensively demonstrating the beneficial effects of reduced cocaine use. These outcomes can, in turn, support the adoption of reduced cocaine use as a viable alternative endpoint in cocaine treatment trials.
Cohort studies demonstrate that people who later develop schizophrenia, on average, present with mild cognitive deficits in childhood and experience a decline in adolescence and adulthood. Yet tremendous heterogeneity exists during the course of psychotic disorders, including the prodromal period. Individuals identified as being in this period (clinical high risk for psychosis, CHR-P) are at heightened risk for developing psychosis (~35%) and begin to exhibit cognitive deficits. Cognitive impairments in CHR-P (as a singular group) appear to be relatively stable or to ameliorate over time, although a sizeable proportion has been described as declining on measures of processing speed or verbal learning. The purpose of this analysis is to use data-driven approaches to identify latent subgroups among CHR-P based on cognitive trajectories. This will yield a clearer understanding of the timing and presentation of both general and domain-specific deficits.
Participants and Methods:
Participants included 684 young people at CHR-P (ages 12–35) from the second cohort of the North American Prodrome Longitudinal Study. Performance on the MATRICS Consensus Cognitive Battery (MCCB) and the Wechsler Abbreviated Scale of Intelligence (WASI-I) was assessed at baseline and at 12 and 24 months. Tested MCCB domains included verbal learning, speed of processing, working memory, and reasoning and problem-solving. Sex- and age-based norms were utilized. The Oral Reading subtest of the Wide Range Achievement Test (WRAT4) indexed premorbid IQ at baseline. Latent class mixture models were used to identify distinct trajectories of cognitive performance across two years. One- to five-class solutions were compared to determine the best-fitting solution, based on goodness-of-fit metrics, interpretability of latent trajectories, and proportion of subgroup membership (>5%).
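Latent class mixture models of this kind are typically fit in dedicated statistical packages; a rough Python analogue, shown below on simulated data only, is to cluster each participant's vector of scores at baseline, 12, and 24 months with Gaussian mixtures and compare 1- to 5-class solutions by BIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated scores at baseline, 12, and 24 months for 684 participants:
# most follow a stable average trajectory, a minority a persistently low one.
n_majority, n_minority = 500, 184
stable = rng.normal(loc=[50, 51, 52], scale=8, size=(n_majority, 3))
low    = rng.normal(loc=[30, 30, 31], scale=8, size=(n_minority, 3))
scores = np.vstack([stable, low])

# Compare 1- to 5-class solutions by BIC (lower is better).
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(scores)
    print(f"{k}-class solution: BIC = {gmm.bic(scores):.1f}")

# Class sizes for a candidate solution, to be checked against the >5%
# membership criterion described above.
best = GaussianMixture(n_components=2, random_state=0).fit(scores)
print(np.bincount(best.predict(scores)) / len(scores))
```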
Results:
A one-class solution was found for WASI-I Full-Scale IQ, as people at CHR-P predominantly demonstrated an average IQ that increased gradually over time. For individual domains, one-class solutions also best fit the trajectories for speed of processing, verbal learning, and working memory domains. Two distinct subgroups were identified on one of the executive functioning domains, reasoning and problem-solving (NAB Mazes). The sample divided into unimpaired performance with mild improvement over time (Class I, 74%) and persistent performance two standard deviations below average (Class II, 26%). Between these classes, no significant differences were found for biological sex, age, years of education, or likelihood of conversion to psychosis (OR = 1.68, 95% CI 0.86 to 3.14). Individuals assigned to Class II did demonstrate a lower WASI-I IQ at baseline (96.3 vs. 106.3) and a lower premorbid IQ (100.8 vs. 106.2).
Conclusions:
Youth at CHR-P demonstrate relatively homogeneous trajectories across time in terms of general cognition and most individual domains. In contrast, two distinct subgroups were observed for higher-order cognitive skills involving planning and foresight, and these subgroups notably exist independent of conversion outcome. Overall, these findings replicate and extend results from a recently published latent class analysis that examined 12-month trajectories among CHR-P using a different cognitive battery (Allott et al., 2022). The findings inform which individuals at CHR-P may be most likely to benefit from cognitive remediation and, by establishing meaningful subtypes, can inform understanding of the substrates of these deficits.
Clinical implementation of risk calculator models in the clinical high-risk for psychosis (CHR-P) population has been hindered by heterogeneous risk distributions across study cohorts, which could be attributed to pre-ascertainment illness progression. To examine this, we tested whether the duration of attenuated psychotic symptom (APS) worsening prior to baseline moderated performance of the North American Prodrome Longitudinal Study 2 (NAPLS2) risk calculator. We also examined whether rates of cortical thinning, another marker of illness progression, bolstered clinical prediction models.
Methods
Participants from both the NAPLS2 and NAPLS3 samples were classified as either ‘long’ or ‘short’ symptom duration based on time since APS increase prior to baseline. The NAPLS2 risk calculator model was applied to each of these groups. In a subset of NAPLS3 participants who completed follow-up magnetic resonance imaging scans, change in cortical thickness was combined with the individual risk score to predict conversion to psychosis.
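As a hedged illustration of how an individual risk score and a cortical-thinning measure can be combined in a prediction model and evaluated by AUC, the Python sketch below uses simulated data; it does not reproduce the NAPLS2 risk calculator's actual variables or coefficients.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 400

# Simulated inputs: baseline risk-calculator score (0-1) and annualized
# change in cortical thickness (negative values = thinning).
risk_score = rng.beta(2, 8, size=n)
thinning   = rng.normal(-0.01, 0.02, size=n)

# Simulated conversion outcome driven by both predictors (assumed effects).
logit = -2.5 + 4.0 * risk_score - 40.0 * thinning
converted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Risk score alone vs. risk score plus cortical thinning.
auc_risk_only = roc_auc_score(converted, risk_score)

X = np.column_stack([risk_score, thinning])
combined = LogisticRegression().fit(X, converted)
auc_combined = roc_auc_score(converted, combined.predict_proba(X)[:, 1])

print(f"AUC, risk score only:       {auc_risk_only:.2f}")
print(f"AUC, risk score + thinning: {auc_combined:.2f}")
```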
Results
The risk calculator models achieved similar performance across the combined NAPLS2/NAPLS3 sample [area under the curve (AUC) = 0.69], the long duration group (AUC = 0.71), and the short duration group (AUC = 0.71). The shorter duration group was younger and had higher baseline APS than the longer duration group. The addition of cortical thinning improved the prediction of conversion significantly for the short duration group (AUC = 0.84), with a moderate improvement in prediction for the longer duration group (AUC = 0.78).
Conclusions
These results suggest that early illness progression differs among CHR-P patients, is detectable with both clinical and neuroimaging measures, and could play an essential role in the prediction of clinical outcomes.
The U.S. Department of Agriculture–Agricultural Research Service (USDA-ARS) has been a leader in weed science research covering topics ranging from the development and use of integrated weed management (IWM) tactics to basic mechanistic studies, including biotic resistance of desirable plant communities and herbicide resistance. ARS weed scientists have worked in agricultural and natural ecosystems, including agronomic and horticultural crops, pastures, forests, wild lands, aquatic habitats, wetlands, and riparian areas. Through strong partnerships with academia, state agencies, private industry, and numerous federal programs, ARS weed scientists have made contributions to discoveries in the newest fields of robotics and genetics, as well as the traditional and fundamental subjects of weed–crop competition and physiology and integration of weed control tactics and practices. Weed science at ARS is often overshadowed by other research topics; thus, few are aware of the long history of ARS weed science and its important contributions. This review is the result of a symposium held at the Weed Science Society of America’s 62nd Annual Meeting in 2022 that included 10 separate presentations in a virtual Weed Science Webinar Series. The overarching themes of management tactics (IWM, biological control, and automation), basic mechanisms (competition, invasive plant genetics, and herbicide resistance), and ecosystem impacts (invasive plant spread, climate change, conservation, and restoration) represent core ARS weed science research that is dynamic and efficacious and has been a significant component of the agency’s national and international efforts. This review highlights current studies and future directions that exemplify the science and collaborative relationships both within and outside ARS. Given the constraints of weeds and invasive plants on all aspects of food, feed, and fiber systems, there is an acknowledged need to face new challenges, including agriculture and natural resources sustainability, economic resilience and reliability, and societal health and well-being.
To provide comprehensive population-level estimates of the burden of healthcare-associated influenza.
Design:
Retrospective cross-sectional study.
Setting:
US Influenza Hospitalization Surveillance Network (FluSurv-NET) during 2012–2013 through 2018–2019 influenza seasons.
Patients:
Laboratory-confirmed influenza-related hospitalizations in an 8-county catchment area in Tennessee.
Methods:
The incidence of healthcare-associated influenza was determined using the traditional definition (ie, positive influenza test after hospital day 3) in addition to often underrecognized cases associated with recent post-acute care facility admission or a recent acute care hospitalization for a noninfluenza illness in the preceding 7 days.
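This two-part case definition can be written as a simple classification rule. The sketch below is illustrative, with hypothetical field names for the surveillance record; admission day is counted as hospital day 0, consistent with the companion analyses in this collection.

```python
def classify_influenza_case(test_hospital_day,
                            transferred_from_post_acute_care,
                            days_since_prior_acute_care_discharge):
    """Classify a laboratory-confirmed influenza hospitalization.

    test_hospital_day: hospital day of the first positive influenza test
        (admission day = day 0; assumed indexing).
    transferred_from_post_acute_care: True if admitted directly from a
        post-acute care facility.
    days_since_prior_acute_care_discharge: days since discharge from a prior
        acute-care hospitalization for a noninfluenza illness (None if none).
    """
    if test_hospital_day >= 4:
        return "healthcare-associated (traditional definition)"
    if transferred_from_post_acute_care:
        return "healthcare-associated (enhanced: post-acute care transfer)"
    if (days_since_prior_acute_care_discharge is not None
            and days_since_prior_acute_care_discharge <= 7):
        return "healthcare-associated (enhanced: recent acute-care discharge)"
    return "community-associated"

print(classify_influenza_case(2, False, 5))
```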
Results:
Among the 5,904 laboratory-confirmed influenza-related hospitalizations, 147 (2.5%) had traditionally defined healthcare-associated influenza. When we included patients with a positive influenza test obtained in the first 3 days of hospitalization and who were either transferred to the hospital directly from a post-acute care facility or who were recently discharged from an acute care facility for a noninfluenza illness in the preceding 7 days, we identified an additional 1,031 cases (17.5% of all influenza-related hospitalizations).
Conclusions:
Including influenza cases associated with preadmission healthcare exposures with traditionally defined cases resulted in an 8-fold higher incidence of healthcare-associated influenza. These results emphasize the importance of capturing other healthcare exposures that may serve as the initial site of viral transmission to provide more comprehensive estimates of the burden of healthcare-associated influenza and to inform improved infection prevention strategies.
This paper used data from the Apathy in Dementia Methylphenidate Trial 2 (NCT02346201) to conduct a planned cost-consequence analysis investigating whether treatment of apathy with methylphenidate is economically attractive.
Methods:
A total of 167 patients with clinically significant apathy randomized to either methylphenidate or placebo were included. The Resource Utilization in Dementia Lite instrument assessed resource utilization for the past 30 days and the EuroQol five dimension five level questionnaire assessed health utility at baseline, 3 months, and 6 months. Resources were converted to costs using standard sources and reported in 2021 USD. A repeated measures analysis of variance compared change in costs and utility over time between the treatment and placebo groups. A binary logistic regression was used to assess cost predictors.
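The repeated-measures comparison can be approximated with a mixed model that includes a random intercept per participant and a group × time interaction; the sketch below uses simulated long-format data and assumed effect sizes, not the trial's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated long-format data: 167 participants, utility at baseline, 3, and 6 months.
n = 167
rows = []
for pid in range(n):
    group = "methylphenidate" if pid < n // 2 else "placebo"
    for month in (0, 3, 6):
        gain = 0.02 * month if group == "methylphenidate" else 0.0  # assumed effect
        rows.append({"id": pid, "group": group, "month": month,
                     "utility": 0.6 + gain + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# Mixed model with a random intercept per participant; the group x month
# interaction plays the role of the repeated-measures group-by-time test.
model = smf.mixedlm("utility ~ group * month", data=df, groups=df["id"]).fit()
print(model.summary())
```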
Results:
Costs were not significantly different between groups whether the cost of methylphenidate was excluded (F(2,330) = 0.626, ηp² = 0.004, p = 0.535) or included (F(2,330) = 0.629, ηp² = 0.004, p = 0.534). Utility improved with methylphenidate treatment as there was a group-by-time interaction (F(2,330) = 7.525, ηp² = 0.044, p < 0.001).
Discussion:
Results from this study indicated that there was no evidence for a difference in resource utilization costs between methylphenidate and placebo treatment. However, utility improved significantly over the 6-month follow-up period. These results can aid in decision-making to improve quality of life in patients with Alzheimer’s disease while considering the burden on the healthcare system.
Early in the COVID-19 pandemic, the World Health Organization stressed the importance of daily clinical assessments of infected patients, yet current approaches frequently consider cross-sectional timepoints, cumulative summary measures, or time-to-event analyses. Statistical methods are available that make use of the rich information content of longitudinal assessments. We demonstrate the use of a multistate transition model to assess the dynamic nature of COVID-19-associated critical illness using daily evaluations of COVID-19 patients from 9 academic hospitals. We describe the accessibility and utility of methods that consider the clinical trajectory of critically ill COVID-19 patients.
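As a minimal illustration of what a multistate view of daily assessments provides, the sketch below estimates an empirical one-day transition matrix from simulated daily state sequences; the study's actual multistate transition model is richer (covariates, multiple absorbing states, and formal inference).

```python
import numpy as np

rng = np.random.default_rng(3)
STATES = ["moderate", "severe", "critical", "dead/discharged"]

def simulate_patient(true_P, start=0, max_days=20):
    """Simulate one patient's daily state sequence until absorption or max_days."""
    seq, state = [start], start
    for _ in range(max_days - 1):
        state = rng.choice(len(STATES), p=true_P[state])
        seq.append(state)
        if state == len(STATES) - 1:  # absorbing state
            break
    return seq

# Assumed "true" daily transition probabilities, for simulation only.
true_P = np.array([[0.80, 0.15, 0.03, 0.02],
                   [0.20, 0.60, 0.15, 0.05],
                   [0.05, 0.20, 0.60, 0.15],
                   [0.00, 0.00, 0.00, 1.00]])
sequences = [simulate_patient(true_P) for _ in range(50)]

# Estimate the one-day transition matrix by counting observed transitions.
counts = np.zeros((len(STATES), len(STATES)))
for seq in sequences:
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
row_totals = counts.sum(axis=1, keepdims=True)
P_hat = np.divide(counts, row_totals, out=np.zeros_like(counts),
                  where=row_totals > 0)
print(np.round(P_hat, 2))
```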
To determine the incidence of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection among healthcare personnel (HCP) and to assess occupational risks for SARS-CoV-2 infection.
Design:
Prospective cohort of healthcare personnel (HCP) followed for 6 months from May through December 2020.
Setting:
Large academic healthcare system including 4 hospitals and affiliated clinics in Atlanta, Georgia.
Participants:
HCP, including those with and without direct patient-care activities, working during the coronavirus disease 2019 (COVID-19) pandemic.
Methods:
Incident SARS-CoV-2 infections were determined through serologic testing for SARS-CoV-2 IgG at enrollment, at 3 months, and at 6 months. HCP completed monthly surveys regarding occupational activities. Multivariable logistic regression was used to identify occupational factors that increased the risk of SARS-CoV-2 infection.
Results:
Of the 304 evaluable HCP who were seronegative at enrollment, 26 (9%) seroconverted for SARS-CoV-2 IgG by 6 months. Overall, 219 participants (73%) self-identified as White race, 119 (40%) were nurses, and 121 (40%) worked on inpatient medical-surgical floors. In a multivariable analysis, HCP who identified as Black race were more likely to seroconvert than HCP who identified as White (odds ratio, 4.5; 95% confidence interval, 1.3–14.2). Increased risk for SARS-CoV-2 infection was not identified for any occupational activity, including spending >50% of a typical shift at a patient’s bedside, working in a COVID-19 unit, or performing or being present for aerosol-generating procedures (AGPs).
Conclusions:
In our study cohort of HCP working in an academic healthcare system, <10% had evidence of SARS-CoV-2 infection over 6 months. No specific occupational activities were identified as increasing risk for SARS-CoV-2 infection.
Underrepresentation of Black biomedical researchers demonstrates continued racial inequity and lack of diversity in the field. The Black Voices in Research curriculum was designed to provide effective instructional materials that showcase inclusive excellence, facilitate the dialog about diversity and inclusion in biomedical research, enhance critical thinking and reflection, integrate diverse visions and worldviews, and ignite action. Instructional materials consist of short videos and discussion prompts featuring Black biomedical research faculty and professionals. Pilot evaluation of instructional content showed that individual stories promoted information relevance, increased knowledge, and created behavioral intention to promote diversity and inclusive excellence in biomedical research.
Background:
Despite significant morbidity and mortality, estimates of the burden of healthcare-associated viral respiratory infections (HA-VRI) for noninfluenza infections are limited. In studies assessing the burden of respiratory syncytial virus (RSV), cases are typically classified as healthcare associated if a positive test result occurred after the first 3 days following admission, which may miss healthcare exposures prior to admission. Utilizing an expanded definition of healthcare-associated RSV, we assessed estimates of disease prevalence.
Methods:
This study included laboratory-confirmed cases of RSV in adult and pediatric patients admitted to acute-care hospitals in a catchment area of 8 counties in Tennessee identified between October 1, 2016, and April 30, 2019. Surveillance information was abstracted from hospital and state laboratory databases, hospital infection control databases, reportable condition databases, and electronic health records as a part of the Influenza Hospitalization Surveillance Network by the Emerging Infections Program. Cases were defined as healthcare-associated RSV if laboratory confirmation of infection occurred (1) on or after hospital day 4 (ie, “traditional definition”) or (2) between hospital days 0 and 3 in patients transferred from a chronic care facility or with a recent discharge from another acute-care facility in the 7 days preceding the current index admission (ie, “enhanced definition”). The proportions of laboratory-confirmed RSV cases designated as HA-VRI under the traditional definition alone and with the added enhanced definition were compared.
Results:
We identified 900 cases of RSV in hospitalized patients over the study period. Using the traditional definition for HA-VRI, only 41 (4.6%) were deemed healthcare associated. Adding the cases identified using the enhanced definition, an additional 12 cases (1.3%) were noted in patients transferred from a chronic care facility for the current acute-care admission and 17 cases (1.9%) were noted in patients with a prior acute-care admission in the preceding 7 days. Using our expanded definition, the total proportion of healthcare-associated RSV in this cohort was 69 (7.7%) of 900, compared to 13.1% of cases for influenza (Figure 1). Although the burden of HA-VRI due to RSV was less than that of influenza, when stratified by age, the proportion increased to 11.7% for those aged 50–64 years and 10.1% for those aged ≥65 years (Figure 2).
Conclusions:
RSV infections are often not included in estimates of HA-VRI, but the proportion of cases that are healthcare associated is substantial. Typical surveillance methods likely underestimate the burden of disease related to RSV, especially for those aged ≥50 years.
Background:
Healthcare-associated transmission of influenza leads to significant morbidity, mortality, and cost. Most studies classify healthcare-associated viral respiratory infections (HA-VRI) as those with a positive test result after the first 3 days following admission, which does not account for healthcare exposures prior to admission. Utilizing an expanded definition of healthcare-associated influenza, we aimed to improve estimates of disease prevalence at the population level.
Methods:
This study included laboratory-confirmed cases of influenza in adult and pediatric patients admitted to any acute-care hospital in a catchment area of 8 counties in Tennessee identified between October 1, 2012, and April 30, 2019. Surveillance information was abstracted from hospital and state laboratory databases, hospital infection control practitioner databases, reportable condition databases, and electronic health records as a part of the Influenza Hospitalization Surveillance Network (FluSurv-NET) by the Centers for Disease Control and Prevention (CDC) Emerging Infections Program (EIP). Cases were defined as healthcare-associated influenza if laboratory confirmation of infection occurred (1) on or after hospital day 4 (ie, “traditional definition”) or (2) between hospital days 0 and 3 in patients transferred from a chronic care facility or with a recent discharge from another acute-care facility in the 7 days preceding the current index admission (ie, “enhanced definition”). The proportions of laboratory-confirmed influenza cases designated as HA-VRI under the traditional definition alone and with the added enhanced definition were compared. Data were imported into Stata software for analysis.
Results:
We identified 5,904 cases of laboratory-confirmed influenza in hospitalized patients over the study period. Using the traditional definition for HA-VRI, only 147 (2.5%; seasonal range, 1.3%–3.4%) were deemed healthcare associated (Figure 1). Adding the cases identified using the enhanced definition, an additional 317 cases (5.4%; range, 2.3%–6.7%) were noted in patients transferred from a chronic care facility for the current acute-care admission and 336 cases (5.7%; range, 4.1%–7.4%) were noted in patients with a prior acute-care facility admission in the preceding 7 days. Using our expanded definition, the total proportion of healthcare-associated influenza in this cohort was 772 of 5,904 (13.1%; range, 10.6%–14.8%).
Conclusions:
HA-VRI due to influenza is an underrecognized infection in hospitalized patients. Limiting surveillance assessment of this important outcome to only those patients with a positive influenza test after hospital day 3 captured just 19% of possible healthcare-associated influenza infections across 7 influenza seasons. These results suggest that the traditionally used definitions of healthcare-associated influenza underestimate the true burden of cases.
The earliest stage in the innovation lifecycle, problem formulation, is crucial for setting direction in an innovation effort. When faced with an interesting problem, engineers commonly assume the approximate solution area and focus on ideating innovative solutions. In this project, however, NASA and its contracted partner, Accenture, collaboratively conducted problem discovery to ensure that solutioning efforts were focused on the right problems, for the right users, and on the most critical needs: in this case, exploring weather-tolerant operations (WTO) to further urban air mobility (UAM), known as UAM WTO. The project team leveraged generative, qualitative methods to understand the ecosystem, the players, and where challenges in the industry are inhibiting development. The complexity of the problem area required that the team constantly observe and iterate on problem discovery, effectively “designing the design process.” This paper discusses the approach, methodologies, and selected results, including significant insights on the application of early-stage design methodologies to a complex, system-level problem.
An accurate estimate of the average number of hand hygiene opportunities per patient hour (HHO rate) is required to implement group electronic hand hygiene monitoring systems (GEHHMSs). We sought to identify predictors of HHOs to validate and implement a GEHHMS across a network of critical care units.
Design:
Multicenter, observational study (10 hospitals) followed by quality improvement intervention involving 24 critical care units across 12 hospitals in Ontario, Canada.
Methods:
Critical care patient beds were randomized to receive 1 hour of continuous direct observation to determine the HHO rate. A Poisson regression model determined unit-level predictors of HHOs. Estimates of average HHO rates across different types of critical care units were derived and used to implement and evaluate use of GEHHMS.
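A hedged sketch of such a unit-level Poisson model of HHO counts per observed hour is shown below, using simulated data and only two illustrative predictors; the study's full covariate set and estimates are not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_hours = 500

# Simulated 1-hour observation periods with two of the unit-level
# predictors named in the text (assumed binary coding).
df = pd.DataFrame({
    "night_shift": rng.integers(0, 2, n_hours),
    "precautions": rng.integers(0, 2, n_hours),
})
rate = np.exp(2.2 - 0.3 * df["night_shift"] + 0.4 * df["precautions"])  # assumed effects
df["hho_count"] = rng.poisson(rate)

# Poisson regression of HHO count per observed hour on unit-level predictors.
model = smf.glm("hho_count ~ night_shift + precautions", data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())

# Predicted HHO rate (opportunities per patient-hour) for a given unit profile.
print(model.predict(pd.DataFrame({"night_shift": [0], "precautions": [1]})))
```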
Results:
During 2,812 hours of observation, we identified 25,417 HHOs. There was significant variability in HHO rate across critical care units. Time of day, day of the week, unit acuity, patient acuity, patient population and use of transmission-based precautions were significantly associated with HHO rate. Using unit-specific estimates of average HHO rate, aggregate HH adherence was 30.0% (1,084,329 of 3,614,908) at baseline with GEHHMS and improved to 38.5% (740,660 of 1,921,656) within 2 months of continuous feedback to units (P < .0001).
Conclusions:
Unit-specific estimates based on known predictors of HHO rate enabled broad implementation of GEHHMS. Further longitudinal quality improvement efforts using this system are required to assess the impact of GEHHMS on both HH adherence and clinical outcomes within critically ill patient populations.
Fundamental knowledge about the processes that control the biophysical workings of ecosystems has expanded exponentially since the late 1960s. Scientists then had only primitive knowledge about C, N, P, S, and H2O cycles; plant, animal, and soil microbial interactions and dynamics; and land, atmosphere, and water interactions. With the advent of the systems ecology paradigm (SEP) and the explosion of technologies supporting field and laboratory research, scientists throughout the world were able to assemble the knowledge base known today as ecosystem science. This chapter describes, through the eyes of scientists associated with the Natural Resource Ecology Laboratory (NREL) at Colorado State University (CSU), the evolution of the SEP in discovering how biophysical systems at small scales (ecological sites, landscapes) function as systems. The NREL and CSU are epicenters of the development of ecosystem science. Later, that knowledge, including humans as components of ecosystems, was applied to small regions, regions, and the globe. Many research results that have formed the foundation for ecosystem science and for management of natural resources, terrestrial environments, and their waters are described in this chapter. Throughout are direct and implicit references to the vital collaborations with the global network of ecosystem scientists.
Among 353 healthcare personnel in a longitudinal cohort in 4 hospitals in Atlanta, Georgia (May–June 2020), 23 (6.5%) had severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) antibodies. Spending >50% of a typical shift at the bedside (OR, 3.4; 95% CI, 1.2–10.5) and Black race (OR, 8.4; 95% CI, 2.7–27.4) were associated with SARS-CoV-2 seropositivity.
With human influences driving populations of apex predators into decline, more information is required on how factors affect species at national and global scales. However, camera-trap studies are seldom executed at a broad spatial scale. We demonstrate how uniting fine-scale studies and utilizing camera-trap data of non-target species is an effective approach for broadscale assessments through a case study of the brown hyaena Parahyaena brunnea. We collated camera-trap data from 25 protected and unprotected sites across South Africa into the largest detection/non-detection dataset collected on the brown hyaena, and investigated the influence of biological and anthropogenic factors on brown hyaena occupancy. Spatial autocorrelation had a significant effect on the data, and was corrected using a Bayesian Gibbs sampler. We show that brown hyaena occupancy is driven by specific co-occurring apex predator species and human disturbance. The relative abundance of spotted hyaenas Crocuta crocuta and people on foot had a negative effect on brown hyaena occupancy, whereas the relative abundance of leopards Panthera pardus and vehicles had a positive influence. We estimated that brown hyaenas occur across 66% of the surveyed camera-trap station sites. Occupancy varied geographically, with lower estimates in eastern and southern South Africa. Our findings suggest that brown hyaena conservation is dependent upon a multi-species approach focussed on implementing conservation policies that better facilitate coexistence between people and hyaenas. We also validate the conservation value of pooling fine-scale datasets and utilizing bycatch data to examine species trends at broad spatial scales.
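Occupancy models of this kind separate the probability that a site is occupied (ψ) from the probability of detecting the species when present (p). The sketch below fits a minimal single-season occupancy likelihood to simulated detection histories; it omits covariates and the Bayesian spatial-autocorrelation correction used in the study, and all values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)

# Simulated detection/non-detection histories: 25 sites x 30 survey occasions.
true_psi, true_p = 0.66, 0.3
n_sites, n_occasions = 25, 30
occupied = rng.binomial(1, true_psi, n_sites)
histories = rng.binomial(1, true_p * occupied[:, None], (n_sites, n_occasions))

def neg_log_likelihood(params):
    """Single-season occupancy likelihood on the logit scale."""
    psi, p = 1 / (1 + np.exp(-params))  # logit -> probability
    detections = histories.sum(axis=1)
    # Sites with >=1 detection are certainly occupied; all-zero histories are
    # either unoccupied or occupied but missed on every occasion.
    lik_detected = psi * p**detections * (1 - p)**(n_occasions - detections)
    lik_all_zero = psi * (1 - p)**n_occasions + (1 - psi)
    lik = np.where(detections > 0, lik_detected, lik_all_zero)
    return -np.log(np.clip(lik, 1e-12, None)).sum()

fit = minimize(neg_log_likelihood, x0=[0.0, 0.0])
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(f"estimated occupancy psi = {psi_hat:.2f}, detection p = {p_hat:.2f}")
```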
Antarctica's ice shelves modulate the flow of grounded ice, and weakening of ice shelves due to climate forcing will decrease their ‘buttressing’ effect, causing a response in the grounded ice. The processes governing ice-shelf weakening are complex, and uncertainties in the response of the grounded ice sheet are likewise difficult to assess. The Antarctic BUttressing Model Intercomparison Project (ABUMIP) compares ice-sheet model responses to a decrease in buttressing by investigating the ‘end-member’ scenario of total and sustained loss of ice shelves. Although unrealistic, this scenario enables gauging the sensitivity of an ensemble of 15 ice-sheet models to a total loss of buttressing, hence exhibiting the full potential of marine ice-sheet instability. All models predict that this scenario leads to multi-metre (1–12 m) sea-level rise over 500 years from present day. West Antarctic ice sheet collapse alone leads to a 1.91–5.08 m sea-level rise due to the marine ice-sheet instability. Mass loss rates are a strong function of the sliding/friction law, with plastic laws causing further destabilization of the Aurora and Wilkes Subglacial Basins, East Antarctica. Improvements to marine ice-sheet models have greatly reduced the variability between modelled ice-sheet responses to extreme ice-shelf loss, e.g. compared to the SeaRISE assessments.