This study estimates the incubation period of COVID-19 among locally transmitted cases, and its association with age, to better inform public health measures in containing COVID-19. Epidemiological data of all PCR-confirmed COVID-19 cases from all restructured hospitals in Singapore were collected between 23 January 2020 and 2 April 2020. Activity mapping and detailed epidemiological investigation were conducted by trained personnel. Positive cases without clear exposure to another positive case were excluded from the analysis. One hundred and sixty-four cases (15.6% of patients) met the inclusion criteria during the defined period. The crude median incubation period was 5 days (range 1–12 days) and median age was 42 years (range 5–79 years). The median incubation period among those 70 years and older was significantly longer than among those younger than 70 years (8 vs. 5 days, P = 0.040). Incubation period was negatively correlated with day of illness in both groups. These findings support current policies of 14-day quarantine periods for close contacts of confirmed cases and 28 days for monitoring infections in known clusters. Elderly persons, who may have longer incubation periods than younger counterparts, may benefit from earlier, proactive testing, especially after exposure to a positive case.
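The abstract does not name the statistical test behind P = 0.040; a nonparametric comparison such as the Mann–Whitney U test is a common choice for skewed incubation-period data. A minimal sketch, with the test choice and all data values as illustrative assumptions rather than the study's records:

```python
# Minimal sketch: comparing incubation periods between age groups
# (70+ vs. <70), as in the comparison above. The test choice and all
# data values are hypothetical, not the study's records.
from scipy.stats import mannwhitneyu

incubation_under_70 = [3, 4, 5, 5, 6, 7, 5, 4]   # days (hypothetical)
incubation_70_plus = [6, 8, 9, 7, 10, 8]         # days (hypothetical)

stat, p = mannwhitneyu(incubation_under_70, incubation_70_plus,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```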
Eggs are considered a high-quality protein source for their complete amino acid profile and digestibility. Therefore, this study aimed to compare the effects of whole egg (WE) v. egg white (EW) ingestion during 12 weeks of resistance training (RT) on skeletal muscle regulatory markers and body composition in resistance-trained men. Thirty resistance-trained men (mean age 24·6 (sd 2·7) years) were randomly assigned to the WE + RT (WER, n 15) or EW + RT (EWR, n 15) group. The WER group ingested three WE, while the EWR group ingested an isonitrogenous quantity of six EW per d immediately after the RT session. Serum concentrations of regulatory markers and body composition were measured at baseline and after 12 weeks. Significant main effects of time were observed for body weight (WER 1·7, EWR 1·8 kg), skeletal muscle mass (WER 2·9, EWR 2·7 kg), fibroblast growth factor-2 (WER 116·1, EWR 83·2 pg/ml) and follistatin (WER 0·05, EWR 0·04 ng/ml), which significantly increased (P < 0·05), and for fat mass (WER –1·9, EWR –1·1 kg), transforming growth factor-β1 (WER –0·5, EWR −0·1 ng/ml), activin A (WER –6·2, EWR –4·5 pg/ml) and myostatin (WER –0·1, EWR –0·06 ng/ml), which significantly decreased (P < 0·05) in both WER and EWR groups. The consumption of egg whites (eggs without the yolk) during chronic RT resulted in body composition and functional outcomes similar to those with WE of equal protein value. EW or WE may be used interchangeably for the dietary support of RT-induced muscular hypertrophy when protein intake is maintained.
A significant proportion of inpatient antimicrobial prescriptions are inappropriate. Post-prescription review with feedback has been shown to be an effective means of reducing inappropriate antimicrobial use. However, implementation is resource intensive. Our aim was to evaluate the performance of traditional statistical models and machine-learning models designed to predict which patients receiving broad-spectrum antibiotics require a stewardship intervention.
We performed a single-center retrospective cohort study of inpatients who received an antimicrobial tracked by the antimicrobial stewardship program. Data were extracted from the electronic medical record and used to develop logistic regression and boosted-tree models to predict whether antibiotic therapy required stewardship intervention on any given day, as compared with the criterion standard of a note left by the antimicrobial stewardship team in the patient’s chart. Model performance was measured using the area under the receiver operating characteristic curve (AUROC) and evaluated on a hold-out validation cohort.
Both the logistic regression and boosted-tree models demonstrated fair discriminatory power, with AUROCs of 0.73 (95% confidence interval [CI], 0.69–0.77) and 0.75 (95% CI, 0.72–0.79), respectively (P = .07). Both models demonstrated good calibration. The number of patients who would need to be reviewed to identify 1 patient requiring stewardship intervention was high for both models (41.7–45.5 for models tuned to a sensitivity of 85%).
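The number-needed-to-review figure is the reciprocal of the positive predictive value at the chosen operating point. A hedged sketch of this evaluation on synthetic data (scikit-learn's gradient boosting stands in for the unspecified boosted-tree implementation; the dataset, class balance, and threshold are illustrative):

```python
# Hedged sketch: evaluating a stewardship-intervention classifier on a
# hold-out set. Computes AUROC and "number needed to review" (1 / PPV)
# at the threshold achieving ~85% sensitivity. Model and data are
# illustrative, not the study's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = GradientBoostingClassifier().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, scores))

# Tune the decision threshold to ~85% sensitivity, then compute
# number needed to review = 1 / precision at that threshold.
fpr, tpr, thresholds = roc_curve(y_te, scores)
thr = thresholds[np.argmax(tpr >= 0.85)]
pred = scores >= thr
precision = y_te[pred].mean()
print("Number needed to review:", 1 / precision)
```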
Complex models can be developed to predict which patients require a stewardship intervention. However, further work is required to develop models with adequate discriminatory power to be applicable to real-world antimicrobial stewardship practice.
Previous research has suggested an association between depression and subsequent acute stroke incidence, but few studies have examined any effect modification by sociodemographic factors. In addition, no studies have investigated this association among primary care recipients with hypertension.
We examined the anonymized records of all public general outpatient visits by patients aged 45+ during January 2007–December 2010 in Hong Kong to extract primary care patients with hypertension for analysis. We took the last consultation date as the baseline and followed them up for 4 years (until 2011–2014) to observe any subsequent acute hospitalization due to stroke. Mixed-effects Cox models (random intercept across 74 included clinics) were implemented to examine the association between depression (ICPC diagnosis or anti-depressant prescription) at baseline and the hazard of acute stroke (ICD-9: 430–437.9). Effect modification by age, sex, and recipient status of social security assistance was examined in extended models with respective interaction terms specified.
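The study's mixed-effects Cox model (random intercept per clinic) is typically fit in R (e.g., coxme). As a rough Python analogue under stated assumptions, the sketch below fits a standard Cox model with clinic-clustered robust standard errors on simulated data; the column names, effect sizes, and the clustering approximation are all illustrative, not the study's specification:

```python
# Hedged sketch: Cox regression with clinic-level clustering on
# simulated data. lifelines' cluster_col applies a sandwich (robust)
# variance estimator over clinics -- an approximation to, not the same
# as, a random-intercept (frailty) model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
depression = rng.binomial(1, 0.023, n)   # ~2.3% prevalence, as above
age = rng.uniform(45, 85, n)
clinic = rng.integers(0, 74, n)          # 74 clinics, as above

# Hypothetical hazard: depression raises stroke risk ~17% (HR ~ 1.17).
hazard = 0.01 * np.exp(0.16 * depression + 0.03 * (age - 60))
time = rng.exponential(1.0 / hazard)
stroke = (time <= 4.0).astype(int)       # events within follow-up
time = np.minimum(time, 4.0)             # administrative censoring at 4 years

df = pd.DataFrame({"time": time, "stroke": stroke,
                   "depression": depression, "age": age, "clinic": clinic})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="stroke", cluster_col="clinic")
cph.print_summary()   # exp(coef) for depression approximates the HR
```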
In total, 396 858 eligible patients were included, with 9099 (2.3%) having depression, and 10 851 (2.7%) eventually hospitalized for stroke. In the adjusted analysis, baseline depression was associated with a 17% increased hazard of acute stroke hospitalization [hazard ratio (HR) = 1.17, 95% confidence interval (CI) 1.03–1.32]. This association was suggested to be even stronger among men than among women (HR = 1.29, 95% CI 1.00–1.67).
Depression is more strongly associated with acute stroke incidence among male than female primary care patients with hypertension. More integrated services are warranted to address their needs.
The present study aimed to compare the effects of drinking different types of coffee before a high-glycaemic index (GI) meal on postprandial glucose metabolism and to assess the effects of adding milk and sugar into coffee. In this randomised, crossover, acute feeding study, apparently healthy adults (n 21) consumed the test drink followed by a high-GI meal in each session. Different types of coffee (espresso, instant, boiled and decaffeinated, all with milk and sugar) and plain water were tested in separate sessions, while a subset of the participants (n 10) completed extra sessions using black coffees. Postprandial levels of glucose, insulin, active glucagon-like peptide 1 (GLP-1) and nitrotyrosine between different test drinks were compared using linear mixed models. Results showed that only preloading decaffeinated coffee with milk and sugar led to significantly lower glucose incremental AUC (iAUC; 14 % lower, P = 0·001) than water. Preloading black coffees led to greater postprandial glucose iAUC than preloading the same coffees with milk and sugar added (iAUC 12–35 % smaller with milk and sugar, P < 0·05 for all coffee types). Active GLP-1 and nitrotyrosine levels were not significantly different between test drinks. To conclude, preloading decaffeinated coffee with milk and sugar led to a blunted postprandial glycaemic response after a subsequent high-GI meal, while adding milk and sugar into coffee could mitigate the impairing effect of black coffee on postprandial glucose responses. These findings may partly explain the positive effects of coffee consumption on glucose metabolism.
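For reference, the incremental AUC outcome is the area under the postprandial glucose curve above the fasting baseline, usually computed by the trapezoidal rule. A minimal sketch with illustrative time points and glucose values (the convention of truncating dips below baseline is an assumption; iAUC definitions vary):

```python
# Minimal sketch: incremental area under the glucose curve (iAUC),
# i.e., trapezoidal area above the fasting baseline. All values are
# illustrative, not study data.
import numpy as np

times = np.array([0, 15, 30, 45, 60, 90, 120])            # minutes
glucose = np.array([5.0, 6.2, 7.8, 7.1, 6.5, 5.8, 5.2])   # mmol/L

increments = np.clip(glucose - glucose[0], 0, None)  # truncate dips below baseline
iauc = np.sum((increments[1:] + increments[:-1]) / 2 * np.diff(times))
print(f"iAUC = {iauc:.1f} mmol/L x min")             # trapezoidal rule
```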
Introduction: Providing comfort care support at home without transport to hospital has not traditionally been part of paramedic practice. The innovative Paramedics Providing Palliative Care at Home Program includes a new clinical practice guideline, medications, a database to share goals of care, and palliative care training. This study aimed to determine essential elements for scale and spread of this model of care through the application of an implementation science model, the Consolidated Framework for Implementation Research (CFIR). Methods: Deliberative dialogue sessions were held with paramedic, palliative care, primary care, and administrative experts in a province that had the Program (Nova Scotia, March 2018) and one that had not (British Columbia, July 2018). Sessions were audio recorded and transcribed. The CFIR was used as the foundation for a framework analysis, which was conducted by four team members independently. Themes were derived by consensus with the broader research team. Results: Inter-sectoral communication between paramedics and other health care providers was key, and challenging due to privacy concerns. Relationships with health care providers are critical to promoting the new model of care to patients, managing expectations, and providing follow-up/ongoing care. Training was an essential characteristic of the intervention that can be adapted to suit local needs, although cost is a factor. There were challenges due to the culture and implementation climate, as a shift in the mindset of paramedics away from traditional roles is required to implement the model. Paramedic champions can play an important role in shifting the mindset of paramedics towards a new way of practice. Conclusion: The CFIR construct of cosmopolitanism, emphasizing the importance of breaking down silos and engaging diverse stakeholders, emerged as one of the most important constructs. This will be helpful for successful scale and spread of the program.
In an attempt to assess the universal applicability of the International Classification of Diseases (ICD-10), two psychiatrists from different socio-cultural backgrounds and training independently performed a chart review of 238 Chinese patients. Inter-rater reliability figures were comparable to those found in the WHO-coordinated ICD-10 field trials. The results suggest that ICD-10 has good ‘universality’ in routine clinical practice.
General themes were extracted from qualitative interviews.
Although experts confirmed the importance of evaluating the clinical and cost-effectiveness of treatments as part of a sequence, the current HTA decision making framework is not conducive to this. Developing an RRMS treatment-sequencing model that meets HTA requirements is difficult, in particular due to scarcity of effectiveness data in later treatment lines.
At present, a treatment-sequencing model for RRMS may be desirable yet not requested by HTA bodies for their decision making. However, there could be other areas where a treatment-sequencing model for RRMS is of use.
Approximately one in every five Singaporean households employs Foreign Domestic Workers (FDWs) (Humanitarian Organization for Migration Economics [HOME], 2015). Mental health problems, especially depression, are prevalent among FDWs in Singapore (HOME, 2015). Yet, there is a lack of empirically-supported interventions to address their mental health needs.
To train FDWs as mental health paraprofessionals with selected CBT skills for depression, which may enable them to provide basic assistance to their fellow domestic workers with depressive symptoms.
To present and assess the effectiveness and acceptability of a group CBT-based paraprofessional training program for FDWs, delivered in four weekly 3-hour sessions.
Participants were randomized into either an intervention or a wait-list control group. Participants in the wait-list group received the training after the intervention group completed the training. Both groups completed questionnaires assessing attitudes towards seeking psychological help; stigma towards people with depression; self-confidence in delivering CBT; general self-efficacy; knowledge of depression and CBT before, immediately after, and two months following the training.
Thirty-eight of 40 participants completed the program. The two groups did not differ on changes in any of the outcome variables. However, within-group analyses showed improved attitudes towards seeking professional help for mental health issues, greater depression literacy, and greater CBT knowledge following the training. These changes were sustained at the 2-month follow-up. All participants indicated a high level of satisfaction with the program.
These preliminary results highlight the potential effectiveness and feasibility of implementing the training as a stepped-care mental health service to address the high rate of depression among the FDW community.
Disclosure of interest
The authors have not supplied their declaration of competing interest.
To describe the infection control preparedness measures undertaken in Hong Kong for coronavirus disease (COVID-19) due to SARS-CoV-2 (previously known as 2019 novel coronavirus) in the first 42 days after the announcement of a cluster of pneumonia cases in China on December 31, 2019 (day 1).
A bundled approach of active and enhanced laboratory surveillance, early airborne infection isolation, rapid molecular diagnostic testing, and contact tracing for healthcare workers (HCWs) with unprotected exposure in the hospitals was implemented. Epidemiological characteristics of confirmed cases, environmental samples, and air samples were collected and analyzed.
From day 1 to day 42, 42 of 1,275 patients (3.3%) fulfilling the criteria for active (n = 29) and enhanced laboratory surveillance (n = 13) were confirmed to have SARS-CoV-2 infection. The number of locally acquired cases significantly increased from 1 of 13 confirmed cases (7.7%, day 22 to day 32) to 27 of 29 confirmed cases (93.1%, day 33 to day 42; P < .001). Among them, 28 patients (66.6%) came from 8 family clusters. Of 413 HCWs caring for these confirmed cases, 11 (2.7%) had unprotected exposure requiring quarantine for 14 days. None of these was infected, and nosocomial transmission of SARS-CoV-2 was not observed. Environmental surveillance was performed in the room of a patient with viral loads of 3.3 × 10⁶ copies/mL in pooled nasopharyngeal and throat swabs and 5.9 × 10⁶ copies/mL in saliva. SARS-CoV-2 was identified in 1 of 13 environmental samples (7.7%) but not in 8 air samples collected at a distance of 10 cm from the patient’s chin, with or without the patient wearing a surgical mask.
Appropriate hospital infection control measures were able to prevent nosocomial transmission of SARS-CoV-2.
In the past decade, network analysis (NA) has been applied to psychopathology to quantify complex symptom relationships. This statistical technique has demonstrated much promise, as it provides researchers the ability to identify relationships across many symptoms in one model and can identify central symptoms that may predict important clinical outcomes. However, network models are highly influenced by node selection, which could limit the generalizability of findings. The current study (N = 6850) tests a comprehensive, cognitive–behavioral model of eating-disorder symptoms using items from two widely used measures (the Eating Disorder Examination Questionnaire and the Eating Pathology Symptoms Inventory).
We used NA to identify central symptoms and compared networks across the duration of illness (DOI), as chronicity is one of the only known predictors of poor outcome in eating disorders (EDs).
Our results suggest that eating when not hungry and feeling fat were the most central symptoms across groups. There were no significant differences in network structure across DOI, meaning the connections between symptoms remained relatively consistent. However, differences emerged in central symptoms, such that cognitive symptoms related to overvaluation of weight/shape were central in individuals with shorter DOI, whereas behavioral symptoms were more central in those with medium and long DOI.
Our results have important implications for the treatment of individuals with enduring EDs, as they may have a different set of core, maintaining symptoms. Additionally, our findings highlight the importance of using comprehensive, theoretically- or empirically-derived models for NA.
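Psychopathology networks of this kind are usually estimated in R (e.g., qgraph or bootnet) as regularized partial-correlation networks, with node centrality summarized by strength. A rough Python analogue, under the assumption of a Gaussian graphical model estimated via the graphical lasso on simulated item data:

```python
# Hedged sketch: estimating a symptom network as a regularized
# partial-correlation (Gaussian graphical) model, then computing
# strength centrality (sum of absolute edge weights per node).
# The data are simulated, not the study's.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # 500 respondents, 8 symptom items
X[:, 1] += 0.6 * X[:, 0]        # induce one illustrative symptom link

model = GraphicalLassoCV().fit(X)
prec = model.precision_

# Convert the precision matrix to partial correlations (edge weights).
d = np.sqrt(np.diag(prec))
partial_corr = -prec / np.outer(d, d)
np.fill_diagonal(partial_corr, 0.0)

strength = np.abs(partial_corr).sum(axis=0)   # strength centrality
print("Most central symptom (item index):", strength.argmax())
```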
The Murchison Widefield Array (MWA) is an open access telescope dedicated to studying the low-frequency (80–300 MHz) southern sky. Since beginning operations in mid-2013, the MWA has opened a new observational window in the southern hemisphere enabling many science areas. The driving science objectives of the original design were to observe 21 cm radiation from the Epoch of Reionisation (EoR), explore the radio time domain, perform Galactic and extragalactic surveys, and monitor solar, heliospheric, and ionospheric phenomena. Altogether, these programs recorded 20 000 h of observations, producing 146 papers to date. In 2016, the telescope underwent a major upgrade resulting in alternating compact and extended configurations. Other upgrades, including digital back-ends and a rapid-response triggering system, have been developed since the original array was commissioned. In this paper, we review the major results from the prior operation of the MWA and then discuss the new science paths enabled by the improved capabilities. We group these science opportunities by the four original science themes but also include ideas for directions outside these categories.
At some time during one’s practice in anaesthesiology, one cannot help but notice certain obsessive–compulsive tendencies in our colleagues. Such traits are quickly revealed when you put them under pressure by asking them to do an unplanned emergency case and disrupt the cocoon that is their elective list. In contrast to having known and prepared for all of the patient’s problems, they are now compelled to deal with a relatively unknown and often sub-optimal situation. More likely than not, they will have to induce anaesthesia with rapid sequence induction (RSI). Whereas some may be thrilled, others are less impressed with the disorder introduced into their world. What is it about emergency cases that should be such a bother? In particular, can TIVA enthusiasts thrive in this environment? At the time of writing, the use of TIVA in emergency cases is indeed somewhat uncharted territory, as very few studies have examined this area.
In keeping with the spirit of producing a practical book, we took editorial privileges and removed some of the more detailed text from various chapters and yet felt it would be a waste if some of it weren’t shared with our readers. At the same time, there are aspects of TIVA that are not necessarily recommended for novices but may entice those who have had a bit of experience and want to extend their TIVA repertoire. Therefore we thought we would create a final chapter that would incorporate some such material, hopefully in a semi-logical fashion.
The arrival of versatile, easy-to-use, commercially available, target-controlled drug delivery systems has simplified TIVA, making it as simple as using a vaporiser. Most have a choice of PK algorithms. The Marsh and Schnider models are the most commonly used for propofol and have various pros and cons. However, the important point about these models is that they can both make proportional changes in blood concentration, allowing easy titration. New data are becoming available for more precise ke0 and PK parameters that will improve accuracy, and therefore new models are likely to be developed. Remifentanil can also be administered with TCI using the Minto model but, as its pharmacokinetics are relatively simple, it can also be delivered as an ordinary infusion (µg·kg⁻¹·min⁻¹). The use of these techniques is discussed elsewhere in the book, so here we will concentrate on how to physically set up your TIVA system.
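To illustrate what a TCI pump computes under the hood, here is a minimal sketch of plasma concentration under a constant infusion in a three-compartment model, integrated by simple Euler steps. The rate constants are the commonly quoted Marsh propofol values, included purely for illustration; verify any parameters against your pump's documentation, as this is a teaching sketch, not a clinical tool:

```python
# Hedged sketch: plasma concentration from a three-compartment PK model
# under a constant infusion, by Euler integration. Rate constants are
# the commonly quoted Marsh propofol values (illustrative only; verify
# before any clinical use).
V1 = 0.228                            # L/kg, central compartment volume
k10, k12, k13 = 0.119, 0.112, 0.042   # /min, elimination and inter-compartment
k21, k31 = 0.055, 0.0033              # /min, return rate constants

weight = 70.0                      # kg (hypothetical patient)
infusion = 10.0 * weight / 60.0    # mg/min, for a 10 mg/kg/h infusion
dt = 0.01                          # min, integration step
a1 = a2 = a3 = 0.0                 # drug amounts in each compartment (mg)

for _ in range(int(20 / dt)):      # simulate 20 minutes
    da1 = infusion - (k10 + k12 + k13) * a1 + k21 * a2 + k31 * a3
    da2 = k12 * a1 - k21 * a2
    da3 = k13 * a1 - k31 * a3
    a1, a2, a3 = a1 + da1 * dt, a2 + da2 * dt, a3 + da3 * dt

print(f"Plasma concentration at 20 min: {a1 / (V1 * weight):.2f} ug/mL")
```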
Like many of you, we’re sure, we were trained to use IV anaesthetic agents for induction of anaesthesia but volatiles for maintenance – a sensible and seemingly safe combination that has been used for decades. So why change? The initial attraction of TIVA was the extremely rapid, smooth and clear-headed recovery of patients when using propofol as the hypnotic component of an anaesthetic. This is particularly apparent when the drug is used for cases of short to intermediate duration, for example in day-case surgery with earlier discharge from the post-anaesthetic care unit. Clearly in modern practice, which is moving towards shorter in-patient stays, this represents a major advantage. In addition, improved levels of patient satisfaction occur with TIVA, presumably due to the favourable recovery profile. Certainly, desflurane and sevoflurane allow rapid recovery, but it is not as smooth: there may be more emergence delirium, and quality indicators are not as good.