The third edition of this practical introduction to Python has been thoroughly updated, with all code migrated to Jupyter notebooks. The notebooks are available online with executable versions of all of the book's content (and more). The text starts with a detailed introduction to the basics of the Python language, without assuming any prior knowledge. The most important Python packages for numerical math (NumPy), symbolic math (SymPy), and plotting (Matplotlib) are then introduced, each building on the preceding material, with brand-new chapters covering numerical methods (SciPy) and data handling (Pandas). Further new material includes guidelines for writing efficient Python code and publishing code for other users. Simple and concise code examples, revised for compatibility with Python 3, guide the reader and support the learning process throughout the book. Readers from all of the quantitative sciences, whatever their background, will be able to quickly acquire the skills needed for using Python effectively.
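As a taste of the workflow described here, a minimal sketch combining NumPy and SymPy (an illustration of the packages mentioned, not an excerpt from the book):

```python
import numpy as np
import sympy as sp

# NumPy: vectorized evaluation of f(x) = x**2 on a grid of 5 points
x = np.linspace(0.0, 1.0, 5)
y = x**2
print(y)

# SymPy: the exact symbolic derivative of the same function
t = sp.Symbol("t")
print(sp.diff(t**2, t))  # 2*t
```

Matplotlib would then plot `x` against `y` with a single `plt.plot(x, y)` call, which is the kind of three-package pipeline the book builds up chapter by chapter.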
Dental healthcare personnel (DHCP) are at high risk of exposure to coronavirus disease 2019 (COVID-19). We sought to identify how DHCP changed their use of personal protective equipment (PPE) as a result of the COVID-19 pandemic, and to pilot an educational video designed to improve knowledge of proper PPE use.
The study comprised 2 sets of semistructured qualitative interviews.
The study was conducted in 8 dental clinics in a Midwestern metropolitan area.
In total, 70 DHCP participated in the first set of interviews; 63 DHCP participated in the second set of interviews.
In September–November 2020 and March–October 2021, we conducted 2 sets of semistructured interviews: (1) PPE use in the dental community during COVID-19, and (2) feedback on the utility of an educational donning and doffing video.
Overall, 86% of DHCP reported having prior training. DHCP increased the use of PPE during COVID-19, specifically N95 respirators and face shields. DHCP reported real-world challenges to applying infection control methods, often resulting in PPE modification and reuse. DHCP reported double masking and sterilization methods to extend N95 respirator use. Additional PPE challenges included shortages, discomfort, and compatibility issues with specialty dental equipment. DHCP found the educational video helpful and relevant to clinical practice. Fewer than half of DHCP reported exposure to a similar video.
DHCP experienced significant challenges related to PPE access and routine use in dental clinics during the COVID-19 pandemic. An educational video improved awareness and uptake of appropriate PPE use among DHCP.
To perform a statewide characteristics and outcomes analysis of the Trisomy 18 (T18) population and explore the potential impact of associated congenital heart disease (CHD) and congenital heart surgery.
Retrospective review of the Texas Hospital Inpatient Discharge Public Use Data File between 2009 and 2019, analysing discharges of patients with T18 identified using ICD-9/10 codes. Discharges were linked at the patient level to analyse individual patients across admissions. Demographic characteristics and available outcomes were evaluated. The population was divided into groups for comparison: patients with no documentation of CHD (T18NoCHD), patients with CHD without congenital heart surgery (T18CHD), and patients who underwent congenital heart surgery (T18CHS).
One thousand one hundred fifty-six eligible patients were identified: 443 (38%) T18NoCHD, 669 (58%) T18CHD, and 44 (4%) T18CHS. T18CHS had a lower proportion of Hispanic patients (9, 20.45%) compared to T18CHD (315, 47.09%) and T18NoCHD (219, 49.44%) (p < 0.001 for both). Patients with Medicare/Medicaid insurance had an odds ratio of 0.42 (95% CI: 0.20–0.86, p = 0.020) for undergoing congenital heart surgery compared to patients with private insurance. T18CHS had a higher median number of total in-hospital days (47.5 [IQR: 12.25–113.25] vs. 9 [IQR: 3–24] and 2 [IQR: 1–5]; p < 0.001) and a higher median number of admissions (2 [IQR: 1–4] vs. 1 [IQR: 1–2] and 1 [IQR: 1–1]; p < 0.001 for both). However, the post-operative median number of admissions for T18CHS was 0 [IQR: 0–2]. After the first month of life, T18CHS had freedom from in-hospital mortality similar to that of T18NoCHD and superior to that of T18CHD.
Short-term outcomes for T18CHS patients are encouraging, suggesting freedom from in-hospital mortality that resembles that of T18NoCHD patients. The highlighted socio-economic differences between the groups warrant further investigation. Development of a prospective registry for T18 patients should be a priority to better understand longer-term outcomes.
Social scientists commonly seek to make statements about how word use varies over circumstances—including time, partisan identity, or some other document-level covariate. For example, researchers might wish to know how Republicans and Democrats diverge in their understanding of the term “immigration.” Building on the success of pretrained language models, we introduce the à la carte on text (conText) embedding regression model for this purpose. This fast and simple method produces valid vector representations of how words are used—and thus what words “mean”—in different contexts. We show that it outperforms slower, more complicated alternatives and works well even with very few documents. The model also allows for hypothesis testing and statements about statistical significance. We demonstrate that it can be used for a broad range of important tasks, including understanding US polarization, historical legislative development, and sentiment detection. We provide open-source software for fitting the model.
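A minimal sketch of the idea behind embedding regression, using toy vectors rather than real pretrained embeddings (the vocabulary, random embeddings, and identity transform here are illustrative assumptions; the actual conText method learns the à la carte transform from a corpus):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pre-trained embeddings and à la carte transform (assumptions for
# illustration; in practice E comes from a model such as GloVe and A is learned)
vocab = {w: i for i, w in enumerate(["border", "reform", "jobs", "security"])}
E = rng.normal(size=(len(vocab), 4))   # one row per word
A = np.eye(4)                          # à la carte transform (identity here)

def alc_embed(context_words):
    """Embed one occurrence of a target term from its surrounding context."""
    idx = [vocab[w] for w in context_words]
    return A @ E[idx].mean(axis=0)

# One context per document mentioning the target term, plus a party covariate
contexts = [["border", "security"], ["reform", "jobs"],
            ["border", "jobs"], ["reform", "security"]]
X = np.array([[1, 1], [1, 0], [1, 1], [1, 0]])   # intercept + party indicator
Y = np.vstack([alc_embed(c) for c in contexts])

# Multivariate OLS: each embedding dimension is regressed on the covariates;
# the norm of the party coefficient row measures usage divergence by party
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(np.linalg.norm(beta[1]))
```

In the actual model, the size of that coefficient norm is what hypothesis tests (e.g. via permutation) assess for statistical significance.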
To evaluate variables that affect risk of contamination for endoscopic retrograde cholangiopancreatography and endoscopic ultrasound endoscopes.
Observational, quality improvement study.
University medical center with a gastrointestinal endoscopy service performing ∼1,000 endoscopic retrograde cholangiopancreatography and ∼1,000 endoscopic ultrasound endoscope procedures annually.
Duodenoscope and linear echoendoscope sampling (from the elevator mechanism and instrument channel) was performed from June 2020 through September 2021. Operational changes during this period included a switch from standard reprocessing with high-level disinfection plus ethylene oxide gas sterilization (HLD-ETO) to double high-level disinfection (dHLD) between June 16 and July 15, 2020, and a change to disposable-tip duodenoscopes in March 2021. The frequency of contamination for the co-primary outcomes was characterized by calculated risk ratios.
The overall pathogenic contamination rate was 4.72% (6 of 127). Compared to duodenoscopes, linear echoendoscopes had a contamination risk ratio of 3.64 (95% confidence interval [CI], 0.69–19.1). Reprocessing using HLD-ETO was associated with a contamination risk ratio of 0.29 (95% CI, 0.06–1.54). Linear echoendoscopes undergoing dHLD had the highest risk of contamination (2 of 18, 11.1%), whereas duodenoscopes undergoing HLD-ETO had the lowest (0 of 53, 0%). Duodenoscopes with a disposable tip had a 0% contamination rate (0 of 27).
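Risk ratios of this kind can be computed with the standard Katz log-interval formula; the per-group counts below are a hypothetical split of the reported totals (only the overall 6 of 127 is stated above, so the 4/40 vs. 2/87 division is an assumption for illustration):

```python
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """Risk ratio of group 1 vs. group 2 with a 95% Katz log CI.
    a/n1 and b/n2 are contaminated/total counts for each group."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)   # SE of log(RR)
    return rr, rr * math.exp(-z * se), rr * math.exp(z * se)

# Hypothetical split: 4 of 40 echoendoscopes vs. 2 of 87 duodenoscopes
print(risk_ratio_ci(4, 40, 2, 87))
```

A wide interval that crosses 1, as in the study's 0.69–19.1, is why the authors do not claim a significant difference despite the large point estimate.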
We did not detect a significant reduction in endoscope contamination using HLD-ETO versus dHLD reprocessing. Linear echoendoscopes have a risk of contamination similar to that of duodenoscopes. Disposable tips may reduce the risk of duodenoscope contamination.
Management of total anomalous pulmonary venous connections has been extensively studied to further improve outcomes. Our institution previously reported factors associated with mortality, recurrent obstruction, and reintervention. The study purpose was to revisit the cohort of patients and evaluate factors associated with reintervention and mortality in early and late follow-up.
A retrospective review at our institution identified 81 patients undergoing total anomalous pulmonary venous connection repair from January 2002 to January 2018. Demographic and operative variables were evaluated. Anastomotic reintervention (interventional or surgical) and/or mortality were primary endpoints.
Eighty-one patients met the study criteria. Follow-up ranged from 0 to 6,291 days (17.2 years), with a mean of 1,263 days (3.5 years). Surgical mortality was 16.1% and the reintervention rate was 19.8%. Of the reinterventions performed, 80% occurred within 1.2 years, while 94% of deaths occurred within 4.1 months. Increasing cardiopulmonary bypass times (p = 0.0001) and the presence of obstruction at the time of surgery (p = 0.025) were predictors of mortality, while intracardiac total anomalous pulmonary venous connection type (p = 0.033) was protective. Risk of reintervention was higher with increasing cardiopulmonary bypass times (p = 0.015), single-ventricle anatomy (p = 0.02), and a post-repair gradient >2 mmHg on transesophageal echocardiogram (p = 0.009).
Evaluation of a larger cohort with longer follow-up demonstrated the relationship of anatomic complexity and symptoms at presentation to increased mortality risk after total anomalous pulmonary venous connection repair. The presence of a single ventricle or a post-operative confluence gradient >2 mmHg were risk factors for reintervention. These findings support those found in our initial study.
Growing public concern regarding animal welfare and consumer demand for humanely produced products have placed pressure on the meat, wool and dairy industries to improve and confirm the welfare status of their animals. This has increased the need for reliable methods of assessing animal welfare during commercial farm practices. The measurement of the stress caused by commercial farm practices is a major component of animal welfare assessment. However, a major issue for animal welfare science is that many of the techniques used to measure stress involve invasive procedures, such as blood sampling, which may themselves cause a stress response and therefore affect the measurement of interest. To reduce this problem, a number of non-invasive or minimally invasive methods and devices have been developed to measure stress. These include the measurement of cortisol concentrations in saliva and faeces, and remote devices for recording body temperature, heart rate and the collection of blood samples. This review describes the benefits and limitations of some of these methods for measuring stress. In particular, the review focuses on recent advances and current research in the use of infrared thermography (IRT) for measuring stress. Specific applications for IRT in the dairy and beef industries are also described, including an automated, non-invasive system for early diagnosis of infection in cattle. It is essential that non-invasive measures of acute and chronic stress are developed for reliable assessment of animal welfare during standard farm management practices, and IRT may be a useful tool for this purpose. IRT may offer advantages over many other non-invasive systems as it appears to be capable of measuring different components of the stress axis, including acute sympathetic and hypothalamic-pituitary-adrenocortical responses.
Two experiments were conducted to determine whether maximum eye temperature, measured using infrared thermography (IRT), could be a non-invasive technique for detecting responses of cattle to handling procedures. Experiment one used six crossbred heifers randomly assigned to two groups in a crossover design and subjected to i) being hit with a plastic tube on the rump and ii) being startled by the sudden waving of a plastic bag. Experiment two used 32 crossbred bulls randomly assigned to three treatments: i) control, restraint only; ii) electric prod, two brief applications of an electric prod or, iii) startled, as in experiment one, accompanied by shouting. Exit speed (m s−1) was recorded on release from the restraint. Maximum eye temperature was recorded continuously pre- and post-treatment. In experiment one, eye temperature dropped rapidly between 20 and 40 s following both treatments and returned to baseline between 60 and 80 s following hitting and between 100 and 120 s following startling. In experiment two, eye temperature dropped between 0 and 20 s, following both treatments, and returned to baseline by 180 s, following startling plus shouting, but did not return to baseline for five minutes following electric prod. Exit speed tended to be faster following the electric prod. In conclusion, IRT detected responses possibly due to fear and/or pain associated with the procedures and may therefore be a useful, non-invasive method for assessing aversiveness of handling practices to cattle.
There is a need to assess the welfare of dairy cows that live outdoors under cold and wet conditions. This study combined a number of techniques to measure stress and make an assessment of welfare in this situation. Two groups of ten non-pregnant, non-lactating Holstein Friesian cows were exposed to a week of wind and rain (WR) or housed indoors (I) with pre- and post-treatment weeks indoors in a cross-over design. Wind and rain consisted of continual air movement (7.1 kph) using fans, water sprinkling for 15 min (3.0 mm) per hour, a mean temperature of 3.4°C and wind chill of –0.3°C. Internal body temperature was recorded every ten min and behaviour for 16 h per day. Blood, faeces and infrared temperatures were sampled at 0800h each morning during treatment weeks, and three times per week during pre- and post-treatment weeks. All cows were challenged with 2 ml Leptoshield Vaccine (CSL Animal Health, Australia) subcutaneously after 3 days of cold exposure to test immune responses. During WR, cows spent a greater proportion of time standing and less time lying down and eating than during I. Infrared temperatures were lower during WR than I in both dorsal and orbital (eye) regions. There was a distinct diurnal pattern of internal body temperature which had a greater amplitude during WR than I resulting from both a lower minimum and a higher maximum. The time of the minimum was 40 min later for WR than I. The overall mean body temperature was 0.07°C higher in WR than I. Both plasma and faecal cortisol increased more during WR than during I. Total T4 was higher during WR than I. Non-esterified fatty acid concentration was higher in the week following WR than I. Total white blood cell numbers were lower during WR than I. No treatment differences were found for creatine kinase or for tumour necrosis factor, heat shock protein 90, interleukin 6 or interferon gamma expression in response to vaccination.
In conclusion, this study applied a suite of stress measures to dairy cows exposed to extreme cold and wet conditions. Together, these measures indicated activation of the stress axis, physiological and behavioural adaptations to cold and a reduction in welfare. A number of these measures could be used to assess welfare under cold conditions on farms.
Ear posture, or the frequency of postural changes, may reflect various emotional states of animals. In adult sheep (Ovis aries), the ‘forward’ ear posture has been associated with negative experiences whereas the ‘plane’ posture has been associated with positive ones. This study aimed to see whether ear postures related to the experience of pain in lambs. The ear behaviour of four to eight week-old lambs (n = 44) was measured before and after tail-docking using a rubber ring. Each lamb was docked and its behaviour recorded while in the company of an observer lamb of similar age; each acted once as focal (docked) lamb and once as observer within the same pair. Lambs were docked in one of two rounds, so that half were docked in their first exposure to the test environment and half in their second exposure. Tail-docking was associated with an increase in the proportion of time spent with ears backward and decreases in the proportion of time spent with ears plane and forward (mean [± SEM]: Backward: pre 0.12 [± 0.04], post 0.56 [± 0.04]; Plane: pre 0.55 [± 0.05], post 0.19 [± 0.05]; Forward: pre 0.27 [± 0.04], post 0.18 [± 0.04]). There was also a significant increase in the number of changes between ear postures after docking (pre 5.63 [± 0.66], post 9.11 [± 0.66]). Over both periods, female lambs held their ears asymmetrically for longer than males (mean of ranks [± SEM] [raw proportion of time]: Females 52.14 [± 3.44] [0.09 (± 0.01)], males 37.54 [± 3.40] [0.05 (± 0.01)]). This is the first study to demonstrate changes in the ear posture of lambs associated with the negative experience of pain. Ear posture is a non-invasive indicator of physical pain in lambs and may be useful for evaluating potential welfare compromise.
Using capture-recapture analysis we estimate the effective size of the active Amazon Mechanical Turk (MTurk) population that a typical laboratory can access to be about 7,300 workers. We also estimate that the time taken for half of the workers to leave the MTurk pool and be replaced is about 7 months. Each laboratory has its own population pool which overlaps, often extensively, with the hundreds of other laboratories using MTurk. Our estimate is based on a sample of 114,460 completed sessions from 33,408 unique participants and 689 sessions across seven laboratories in the US, Europe, and Australia from January 2012 to March 2015.
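The capture-recapture logic behind this estimate can be sketched with the Chapman-corrected Lincoln-Petersen estimator; the two-period counts below are hypothetical, chosen only to illustrate how an estimate near 7,300 workers could arise:

```python
def lincoln_petersen(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen population estimate:
    n1 = individuals seen in period 1, n2 = seen in period 2,
    m = seen in both ("recaptured")."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: 2,000 unique workers in one sampling window,
# 2,100 in another, with 560 appearing in both
print(round(lincoln_petersen(2000, 2100, 560)))
```

The intuition is that the recapture fraction m/n2 estimates the fraction of the whole pool contained in the first sample, so a low overlap between samples implies a large underlying population.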
Limited scientific literature is available for developing ‘best practice’ guidelines for the management of dairy goats (Capra hircus), particularly goat kids. Disbudding practices for kids and calves appear to be similar; however, it is important to recognise that kids are not small calves. Disbudding causes pain and is performed on both calves and kids, so welfare concerns surrounding the practice affect both industries. In this review, we evaluate literature on disbudding of kids and calves and compare methodologies across the two species. In addition, we catalogue behavioural and physiological responses to disbudding and, finally, review alternatives to disbudding (or refinements). Although there may be certain similarities between the responses of goat kids and calves to cautery disbudding, it is important to highlight the differences that do exist between the species to reduce the risk of potential detrimental effects (eg brain injury). Cautery disbudding is the most common and efficacious method of disbudding kids and calves; however, kids have thinner skulls and are disbudded at a younger age, which can increase the risk of thermal injury to the brain. Kids and calves show behavioural and physiological responses indicative of pain; however, variability in these responses between studies is likely due to differences in disbudding methodologies, study design and within-species variation. Effective pain mitigation strategies may differ across species; therefore, future research is needed to optimise pain mitigation strategies for kids. Currently, alternatives to cautery disbudding, including (i) selection for polled animals; (ii) managing horned animals; or (iii) the development of novel disbudding methods (eg cryosurgery, clove oil injection), have been deemed unsuitable by the industries as the methods are either impracticable or ineffective. Therefore, if disbudding is to continue, species-appropriate pain mitigation strategies need to be refined.
Establishing best practice guidelines for disbudding kids requires managers to recognise that they are not small calves.
We examined the effects of daily positive or negative human handling on the behaviour of Holstein-Friesian dairy calves (n = 20 calves per treatment, five calves per group). The response to humans and indicators of positive emotions were examined at four weeks of age. Calves that received positive handling approached a familiar handler within 1 min in 50% of the handling sessions compared to 17% of the sessions for negatively handled calves but showed no difference when approaching an unfamiliar person. Calves that received positive handling showed less avoidance behaviour in their home pen to an approaching unfamiliar person (score, positive: 3.7, negative: 2.8) but there was no treatment effect on flight distance when tested outside the home pen. Both treatment groups responded similarly to a novel object and performed the same amount of play behaviour. Calves that received positive handling interacted more with cow brushes than calves that received negative handling (positive: 9.9%, negative: 7.9% of the total time). At three months of age, avoidance behaviour was re-tested, this time including 20 control animals of the same breed and age, reared routinely on-farm. Controls showed more avoidance behaviour (positive: 1.5, negative: 1.0, control: 0.3) and had a greater flight distance (positive: 3.3 m, negative: 3.7 m, control: 4.9 m). The results confirm existing literature demonstrating that the quantity and quality of handling influence the response towards humans. Little evidence was found that the type of early handling influences behaviours indicative of positive emotions.
Racially and ethnically minoritized populations have been historically excluded and underrepresented in research. This paper will describe best practices in multicultural and multilingual awareness-raising strategies used by the Recruitment Innovation Center to increase minoritized enrollment into clinical trials. The Passive Immunity Trial for Our Nation will be used as a primary example to highlight real-world application of these methods to raise awareness, engage community partners, and recruit diverse study participants.
Prehistoric shell mounds can be useful for the quantification of the radiocarbon marine reservoir effect (MRE) and, at the same time, knowledge about the MRE allows for the establishment of robust chronologies for these sites. This creates a loop in which the archaeological setting has a dual role: it is part of both the method and the application. Therefore, it is paramount to address these sites from both archaeological and environmental perspectives, investigating their origin and diagenesis in order to overcome biases caused by post-depositional alterations. In this study, samples of bone, charcoal and shell from a Late Holocene shell mound in Southern Brazil, the Sambaqui de Cabeçuda, were analyzed following a multidisciplinary approach to disentangle the complex relationships between archaeology and the environment. We performed X-ray diffraction, radiocarbon dating, stable isotopes (δ13C, δ18O, δ15N) and anthracology analyses as well as Bayesian Chronological Models and Isotope Mixing Models to assess the local MRE and to reconstruct the diet of Cabeçuda builders. Our results reveal a negative local correction for the MRE (ΔR = –263 ± 46 14C yr), expected for the lagoon next to the site, and diets with considerable intakes of marine proteins. We examine the implications of these results for the chronology of the site and discuss a series of complications when performing MRE studies using shell mound sites.
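As a sketch of what a local ΔR value represents: it is the offset between a marine sample's measured radiocarbon age and the age predicted by the global marine calibration curve for the same calendar date. The paired ages below are hypothetical, chosen only so the offset matches the –263 ¹⁴C yr reported above:

```python
def delta_r(measured_marine_age, modeled_marine_age):
    """Local reservoir correction: measured marine 14C age minus the age
    predicted by the global marine calibration curve (both in 14C yr BP)."""
    return measured_marine_age - modeled_marine_age

# Hypothetical paired ages: a shell measured at 580 14C yr BP where the
# marine curve predicts 843 14C yr BP yields a negative local correction
print(delta_r(580, 843))
```

A negative ΔR, as found at Cabeçuda, means local waters carry less old carbon than the global marine average, consistent with a lagoonal setting.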
Major Depressive Disorder (MDD) is prevalent, often chronic, and requires ongoing monitoring of symptoms to track response to treatment and identify early indicators of relapse. Remote Measurement Technologies (RMT) provide an exciting opportunity to transform the measurement and management of MDD, via data collected from inbuilt smartphone sensors and wearable devices alongside app-based questionnaires and tasks.
To describe the amount of data collected during a multimodal longitudinal RMT study in an MDD population.
RADAR-MDD is a multi-centre, prospective observational cohort study. People with a history of MDD were provided with a wrist-worn wearable and several apps designed to (a) collect data from smartphone sensors and (b) deliver questionnaires, speech tasks and cognitive assessments, and were followed up for a maximum of 2 years.
A total of 623 individuals with a history of MDD were enrolled in the study, with 80% completion rates for primary outcome assessments across all timepoints. Of these, 79.8% participated for the maximum amount of time available and 20.2% withdrew prematurely. Data availability across all RMT data types varied depending on the source of data and the participant burden for each data type. We found no evidence of an association between the severity of depression symptoms at baseline and the availability of data. In total, 110 participants had > 50% of data available across all data types and were thus able to contribute to multiparametric analyses.
RADAR-MDD is the largest multimodal RMT study in the field of mental health. Here, we have shown that collecting RMT data from a clinical population is feasible.
In this survey of 41 hospitals, 18 (72%) of 25 respondents reporting utilization of National Healthcare Safety Network resources demonstrated accurate central-line–associated bloodstream infection reporting compared to 6 (38%) of 16 without utilization (adjusted odds ratio, 5.37; 95% confidence interval, 1.16–24.8). Adherence to standard definitions is essential for consistent reporting across healthcare facilities.
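The crude (unadjusted) odds ratio implied by these counts can be checked with the Woolf log-interval formula; note that it differs from the adjusted odds ratio of 5.37 reported above, which accounts for covariates not listed here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a 95% Woolf (log) CI for a 2x2 table:
    a = accurate reporting with NHSN resource use, b = inaccurate with use,
    c = accurate without use, d = inaccurate without use."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# Counts from the abstract: 18 of 25 accurate with use vs. 6 of 16 without
print(odds_ratio_ci(18, 7, 6, 10))
```

The crude estimate of roughly 4.3 with a lower bound above 1 is consistent in direction with the reported adjusted result.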