Substantial progress has been made in the standardization of nomenclature for paediatric and congenital cardiac care. In 1936, Maude Abbott published her Atlas of Congenital Cardiac Disease, the first formal attempt to classify congenital heart disease. The International Paediatric and Congenital Cardiac Code (IPCCC) is now utilized worldwide and has become the paediatric and congenital cardiac component of the Eleventh Revision of the International Classification of Diseases (ICD-11). The IPCCC was most recently published in 2017; this manuscript provides an updated 2021 version.
The International Society for Nomenclature of Paediatric and Congenital Heart Disease (ISNPCHD), in collaboration with the World Health Organization (WHO), developed the paediatric and congenital cardiac nomenclature that is now within the Eleventh Revision of the International Classification of Diseases (ICD-11). This unification of the IPCCC and ICD-11, the IPCCC ICD-11 Nomenclature, marks the first time that the clinical and administrative nomenclatures for paediatric and congenital cardiac care have been harmonized. The congenital cardiac component of the classification grew from 29 congenital cardiac codes in ICD-9 and 73 in ICD-10 to 318 codes submitted by ISNPCHD through 2018 for incorporation into ICD-11. After these 318 terms were incorporated into ICD-11 in 2018, the WHO ICD-11 team added a further 49 terms, some of which are acceptable legacy terms from ICD-10, while others provide greater granularity than ISNPCHD originally thought acceptable. The total number of paediatric and congenital cardiac terms in ICD-11 is therefore 367. In this manuscript, we describe and review the terminology, hierarchy, and definitions of the IPCCC ICD-11 Nomenclature. This article therefore presents a global system of nomenclature for paediatric and congenital cardiac care that unifies clinical and administrative nomenclature.
The members of ISNPCHD recognize that the nomenclature published in this manuscript will continue to evolve. The version of the IPCCC published in 2017 has evolved and changed, and it is now replaced by this 2021 version. In the future, ISNPCHD will again publish updated versions of the IPCCC as it continues to evolve.
Virtually everything known about Bishop Henry's early life, before he became abbot of Glastonbury in late 1126, derives from supposition and legend. It is known that he became ‘a monk of Cluny’, but precisely when, and at what age, remains unclear. How Henry made his way up the Cluniac cursus honorum is equally enigmatic. Once Henry became abbot of Glastonbury, and still more so after his elevation to the episcopate, it would be reasonable to expect his ties with Cluny to have become more tenuous. As it turns out, the contrary seems to be true: his endeavours on behalf of Cluny throughout his career seem to have been considerable. A cryptic reference in his heavily interpolated ‘will’ alludes to Bishop Henry's early monastic career at Cluny itself, and in more than one of its cells, when he grants access (recursus) to the monks of Winchester ad Cluniacum et ad omnes cellas eiusdem ecclesie unde ego fui monachus. There is also the tradition, preserved in a late set of annals in the Red Book of the Exchequer, of his having once been prior of Montacute in Somerset, the only direct daughter house in England of Cluny proper and, if the tradition reported by John Leland is accurate, a possible proto-Reading Abbey. Although there is a possible slot, 1120 × 1126, for him to have been prior, there were also plenty of grounds for fabricating a link between Montacute and the bishop.
The evidence is more certain in pointing to Henry spending substantial periods of time at Cluny itself, but precisely how long remains a matter of interpretation and, on occasion, speculation. Even for so major a political figure as Bishop Henry, there remain striking gaps in his known itinerary. It would appear that he visited Cluny at least five times: the first occasion perhaps in 1134; a second in the winter of 1143/4 when en route for Rome, possibly returning at an unknown date before June 1145; a third probably in the early spring of 1149, again on the way back from Rome; a fourth, lengthier visit of approximately two years beginning in mid-1155; and a final visit in 1160/1. This chapter revisits the evidence for these trips as a means of exploring Henry's often overlooked relationship with Cluny.
Much of our current understanding of novel coronavirus disease 2019 (COVID-19) comes from hospitalised patients. However, the spectrum of mild and subclinical disease has implications for population-level screening and control. Forty-nine participants were recruited from a group of 99 adults repatriated from a cruise ship with a high incidence of COVID-19. Respiratory and rectal swabs were tested by polymerase chain reaction (PCR) for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Sera were tested for anti-SARS-CoV-2 antibodies by enzyme-linked immunosorbent assay (ELISA) and microneutralisation assay. Symptoms, viral shedding and antibody responses were examined. Forty-five participants (92%) were considered cases based on either a positive PCR or a positive ELISA for immunoglobulin G. Forty-two percent of cases were asymptomatic. Only 15% of symptomatic cases reported fever. Serial respiratory and rectal swabs were positive for 10% and 5% of participants, respectively, at a median of about 3 weeks after symptom onset. Cycle threshold values were high (range 31–45), and attempts to isolate live virus were unsuccessful. The presence of symptoms was not associated with demographics, comorbidities or antibody response. In closed settings, the incidence of COVID-19 could be almost double that suggested by symptom-based screening. Serology may be useful in the diagnosis of mild disease and in aiding public health investigations.
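As a rough illustration of the uncertainty around a case-ascertainment figure like the 92% above, the sketch below computes an exact (Clopper-Pearson) binomial confidence interval for the proportion of participants classified as cases. The counts come from the abstract; the interval itself is not reported in the source and is shown only as an example.

```python
# Minimal sketch: exact (Clopper-Pearson) 95% CI for the proportion
# of repatriated participants classified as COVID-19 cases. Counts are
# from the abstract; the interval is illustrative, not from the source.
from scipy.stats import binomtest

cases, participants = 45, 49
result = binomtest(cases, participants)
ci = result.proportion_ci(confidence_level=0.95, method="exact")
print(f"Attack rate: {cases / participants:.1%}")
print(f"95% CI: {ci.low:.1%} to {ci.high:.1%}")
```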
Despite the rapid growth of online political advertising, the vast majority of scholarship on political advertising relies exclusively on evidence from candidates’ television advertisements. The relatively low cost of creating and deploying online advertisements and the ability to target online advertisements more precisely may broaden the set of candidates who advertise and allow candidates to craft messages for narrower audiences than on television. Drawing on data from the newly released Facebook Ad Library API and television data from the Wesleyan Media Project, we find that a much broader set of candidates advertises on Facebook than on television, particularly in down-ballot races. We then examine within-candidate variation in the strategic use and content of advertising on television relative to Facebook for all federal, statewide, and state legislative candidates in the 2018 election. Among candidates who use both advertising media, Facebook advertising occurs earlier in the campaign and is less negative, less issue focused, and more partisan than television advertising.
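For readers who want to retrieve comparable data, the hedged sketch below shows the general shape of a query against the Facebook Ad Library API mentioned above. The endpoint, parameter, and field names follow Meta's public documentation for recent Graph API versions but should be treated as assumptions and verified before use; the access token and search terms are placeholders.

```python
# Hedged sketch of a Facebook Ad Library API query. Endpoint, parameter,
# and field names are assumptions based on Meta's public docs and may
# differ by API version; ACCESS_TOKEN and search_terms are placeholders.
import requests

ACCESS_TOKEN = "YOUR_TOKEN"  # placeholder; requires Ad Library API access

resp = requests.get(
    "https://graph.facebook.com/v18.0/ads_archive",
    params={
        "access_token": ACCESS_TOKEN,
        "search_terms": "example candidate",   # hypothetical query
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": '["US"]',
        "fields": "page_name,ad_delivery_start_time,ad_creative_bodies",
    },
    timeout=30,
)
resp.raise_for_status()
for ad in resp.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```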
Frascati international research criteria for HIV-associated neurocognitive disorders (HAND) are controversial; some investigators have argued that the Frascati criteria are too liberal, resulting in a high false-positive rate. Meyer et al. recommended more conservative revisions to the HAND criteria, including exploring other commonly used methodologies for defining neurocognitive impairment (NCI) in HIV, such as the global deficit score (GDS). This study compares NCI classifications by the Frascati, Meyer, and GDS methods in relation to neuroimaging markers of brain integrity in HIV.
Method:
Two hundred forty-one people living with HIV (PLWH) without current substance use disorder or severe (confounding) comorbid conditions underwent comprehensive neurocognitive testing, brain structural magnetic resonance imaging, and magnetic resonance spectroscopy. Participants were classified using Frascati versus Meyer criteria as concordant unimpaired [Frascati(Un)/Meyer(Un)], concordant impaired [Frascati(Imp)/Meyer(Imp)], or discordant [Frascati(Imp)/Meyer(Un)], i.e., impaired by Frascati criteria but unimpaired by Meyer criteria. To compare the GDS with the Meyer criteria, the same groupings were formed using GDS criteria in place of Frascati criteria.
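For readers unfamiliar with the GDS approach named above, the sketch below shows the conventional computation (per Carey et al., 2004): demographically corrected T-scores are mapped to 0-5 deficit scores, averaged across the battery, and a GDS of at least 0.5 is conventionally read as impaired. This is a minimal illustration of the general method, not this study's exact pipeline, and the T-scores are hypothetical.

```python
# Minimal sketch of the conventional global deficit score (GDS) method
# (per Carey et al., 2004) -- not this study's exact pipeline.

def deficit_score(t_score: float) -> int:
    """Map a demographically corrected T-score to a 0-5 deficit score."""
    if t_score >= 40: return 0   # within normal limits
    if t_score >= 35: return 1   # mild deficit
    if t_score >= 30: return 2   # mild-to-moderate
    if t_score >= 25: return 3   # moderate
    if t_score >= 20: return 4   # moderate-to-severe
    return 5                     # severe

def global_deficit_score(t_scores: list[float]) -> float:
    """Average per-test deficit scores across the battery."""
    return sum(deficit_score(t) for t in t_scores) / len(t_scores)

battery = [45, 38, 33, 50, 41, 29]   # hypothetical T-scores
gds = global_deficit_score(battery)
print(f"GDS = {gds:.2f} -> {'impaired' if gds >= 0.5 else 'unimpaired'}")
```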
Results:
When examining Frascati versus Meyer criteria, discordant Frascati(Imp)/Meyer(Un) individuals had less cortical gray matter, greater sulcal cerebrospinal fluid volume, and greater evidence of neuroinflammation (i.e., choline) than concordant Frascati(Un)/Meyer(Un) individuals. GDS versus Meyer comparisons indicated that discordant GDS(Imp)/Meyer(Un) individuals had less cortical gray matter and lower levels of energy metabolism (i.e., creatine) than concordant GDS(Un)/Meyer(Un) individuals. In both sets of analyses, the discordant group did not differ from the concordant impaired group on any neuroimaging measure.
Conclusions:
The Meyer criteria failed to capture a substantial portion of PLWH with brain abnormalities. These findings support continued use of Frascati or GDS criteria to detect HIV-associated CNS dysfunction.
Natural disasters often damage or destroy the protective public health service infrastructure (PHI) required to maintain the health and well-being of people with noncommunicable diseases (NCDs). This interruption increases the risk of an acute exacerbation or complication, potentially leading to a worse long-term prognosis or even death. Disaster-related exacerbations of NCDs will continue, if not increase, due to the increasing prevalence of NCDs, a sustained rise in the frequency and intensity of disasters, and rapid, unsustainable urbanization in flood plains and storm-prone coastal zones. Despite this, the focus of disaster and health-system preparedness and response remains on communicable diseases, even when the actual risk of disease outbreaks post-disaster is low, particularly in developed countries. There is now an urgent need to expand preparedness and response beyond communicable diseases to include people with NCDs.
Hypothesis/Problem:
The developing evidence base describing the risk of disaster-related exacerbation of NCDs does not incorporate the perspectives, concerns, and challenges of people actually living with these conditions. To help address this gap, this research explored the key influences on patients' ability to successfully manage their NCDs after a natural disaster.
Methods:
A survey of people with NCDs in Queensland, Australia, collected data on demographics, disease, disaster experience, and primary concerns post-disaster. Descriptive statistics and chi-square tests with a Bonferroni adjustment were used to analyze the data.
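As a concrete illustration of the analysis named above, the sketch below runs a chi-square test of independence and applies a Bonferroni-adjusted significance threshold. The contingency table and the number of comparisons are hypothetical.

```python
# Sketch: chi-square test with a Bonferroni-adjusted alpha, as in the
# Methods above. The table and number of comparisons are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: disease group vs. reporting a given concern
table = np.array([[30, 10],
                  [18, 22]])

chi2, p, dof, expected = chi2_contingency(table)

n_tests = 6                        # hypothetical number of comparisons
alpha_adjusted = 0.05 / n_tests    # Bonferroni adjustment
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, "
      f"significant at adjusted alpha: {p < alpha_adjusted}")
```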
Results:
There were 118 responses to the survey. Key influences on the ability to self-manage post-disaster were access to medication, medical services, water, treatment and care, power, and food. Managing disease-specific symptoms associated with cardiovascular disease, diabetes, mental health conditions, and respiratory diseases was a primary concern following a disaster. Stress and anxiety, loss of sleep, weakness or fatigue, and shortness of breath were common concerns for all patients with NCDs. Those dependent on care from others were most worried about shortness of breath and slow-healing sores. Accessing medication and medical services were priorities for all patients post-disaster.
Conclusion:
The key influences on successful self-management post-disaster for people with NCDs must be reflected in disaster plans and strategies. Achieving this will reduce exacerbations or complications of disease and decrease demand for emergency health care post-disaster.
Objectives: Studies of neurocognitively elite older adults, termed SuperAgers, have identified clinical predictors and neurobiological indicators of resilience against age-related neurocognitive decline. Despite rising numbers of older persons living with HIV (PLWH), SuperAging (SA) in PLWH remains undefined. We aimed to establish neuropsychological criteria for SA in PLWH and examined clinically relevant correlates of SA. Methods: 734 PLWH and 123 HIV-uninfected participants between 50 and 64 years of age underwent neuropsychological and neuromedical evaluations. SA was defined as demographically corrected (i.e., sex, race/ethnicity, education) global neurocognitive performance within the normal range for 25-year-olds. Remaining participants were labeled cognitively normal (CN) or impaired (CI) based on actual age. Chi-square and analysis of variance tests examined HIV group differences in neurocognitive status and demographics. Within PLWH, neurocognitive status groups were compared on HIV disease characteristics, medical comorbidities, and everyday functioning. Multinomial logistic regression explored independent predictors of neurocognitive status. Results: Neurocognitive status rates and demographic characteristics differed between PLWH (SA=17%; CN=38%; CI=45%) and HIV-uninfected participants (SA=35%; CN=55%; CI=11%). In PLWH, neurocognitive groups were comparable on demographic and HIV disease characteristics. Younger age, higher verbal IQ, absence of diabetes, fewer depressive symptoms, and lifetime cannabis use disorder increased the likelihood of SA. SA participants reported greater independence in everyday functioning, higher employment rates, and better health-related quality of life than non-SA participants. Conclusions: Despite the combined neurological risk of aging and HIV, youthful neurocognitive performance is possible for older PLWH. SA relates to improved real-world functioning and may be better explained by cognitive reserve and maintenance of cardiometabolic and mental health than by HIV disease severity. Future research investigating biomarker and lifestyle (e.g., physical activity) correlates of SA may help identify modifiable neuroprotective factors against HIV-related neurobiological aging. (JINS, 2019, 25, 507–519)
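The SA/CN/CI grouping described above can be made concrete with a small sketch. The study used demographically corrected global scores; the normative means, standard deviations, and the "within normal range" cutoff below are assumptions chosen only to illustrate the classification logic.

```python
# Sketch of the SA/CN/CI classification logic. Norms and the z >= -1
# "within normal range" cutoff are assumptions for illustration; the
# study used demographically corrected global neurocognitive scores.

def classify(score: float,
             norm25_mean: float, norm25_sd: float,
             own_age_mean: float, own_age_sd: float) -> str:
    """Label a participant SA, CN, or CI from a global cognition score."""
    if (score - norm25_mean) / norm25_sd >= -1.0:
        return "SA"   # within the range expected of 25-year-olds
    if (score - own_age_mean) / own_age_sd >= -1.0:
        return "CN"   # normal for actual age, but not youthful
    return "CI"       # impaired relative to actual age

print(classify(48.0, norm25_mean=50, norm25_sd=10,
               own_age_mean=44, own_age_sd=10))   # -> SA
```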
An internationally approved and globally used classification scheme for the diagnosis of CHD has long been sought. The International Paediatric and Congenital Cardiac Code (IPCCC), which was produced and has been maintained by the International Society for Nomenclature of Paediatric and Congenital Heart Disease (the International Nomenclature Society), is used widely, but has spawned many “short list” versions that differ in content depending on the user. Thus, efforts to achieve uniform identification of patients with CHD using a single, up-to-date, and coordinated nomenclature system continue to be thwarted, even when a common nomenclature has been used as a basis for composing the various “short lists”. In an attempt to solve this problem, the International Nomenclature Society has linked its efforts with those of the World Health Organization to obtain a globally accepted nomenclature tree for CHD within the 11th revision of the International Classification of Diseases (ICD-11). The International Nomenclature Society has submitted a hierarchical nomenclature tree for CHD to the World Health Organization that is expected to serve increasingly as the “short list” for all communities interested in coding for congenital cardiology. This article reviews the history of the International Classification of Diseases and of the IPCCC, and outlines the process used in developing the ICD-11 congenital cardiac disease diagnostic list and the definitions for each term on the list. An overview of the content of the congenital heart anomaly section of the Foundation Component of ICD-11, published herein in its entirety, is also included. Future plans for the International Nomenclature Society include linking again with the World Health Organization to tackle procedural nomenclature as it relates to cardiac malformations. By doing so, the Society will continue its role in standardising nomenclature for CHD across the globe, thereby promoting research and better outcomes for fetuses, children, and adults with congenital heart anomalies.
Objectives: Human immunodeficiency virus (HIV) disproportionately affects Hispanics/Latinos in the United States, yet little is known about neurocognitive impairment (NCI) in this group. We compared rates of NCI in large, well-characterized samples of HIV-infected (HIV+) Latinos and (non-Latino) Whites, and examined HIV-associated NCI among subgroups of Latinos. Methods: Participants included English-speaking HIV+ adults assessed at six U.S. medical centers (194 Latinos, 600 Whites). For the overall group: age M=42.65 years, SD=8.93; 86% male; education M=13.17 years, SD=2.73; 54% had acquired immunodeficiency syndrome. NCI was assessed with a comprehensive test battery with normative corrections for age, education, and gender. Covariates examined included HIV-disease characteristics, comorbidities, and genetic ancestry. Results: Compared with Whites, Latinos had higher rates of global NCI (54% vs. 42%) and of domain NCI in executive function, learning, recall, working memory, and processing speed. Latinos also fared worse than Whites on current and historical HIV-disease characteristics, and nadir CD4 partially mediated ethnic differences in NCI. Yet Latinos continued to have more global NCI [odds ratio (OR)=1.59; 95% confidence interval (CI)=1.13–2.23; p<.01] after adjusting for significant covariates. Higher rates of global NCI were observed among participants of Puerto Rican (n=60; 71%) versus Mexican (n=79; 44%) origin/descent; this disparity persisted in models adjusting for significant covariates (OR=2.40; CI=1.11–5.29; p=.03). Conclusions: HIV+ Latinos, especially those of Puerto Rican (vs. Mexican) origin/descent, had increased rates of NCI compared with Whites. Differences in rates of NCI were not completely explained by worse HIV-disease characteristics, neurocognitive comorbidities, or genetic ancestry. Future studies should explore culturally relevant psychosocial, biomedical, and genetic factors that might explain these disparities and inform the development of targeted interventions. (JINS, 2018, 24, 163–175)
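The adjusted odds ratios above come from regression models that control for covariates. The sketch below fits a logistic regression to simulated data and reads off an adjusted OR with its confidence interval; the variable names, effect sizes, and data are all hypothetical.

```python
# Sketch: an adjusted odds ratio from logistic regression on simulated
# data. Variable names and effect sizes are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)            # hypothetical group indicator
nadir_cd4 = rng.normal(200, 100, n)      # hypothetical covariate
logit_p = -0.5 + 0.46 * group - 0.002 * nadir_cd4
nci = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([group, nadir_cd4]))
fit = sm.Logit(nci, X).fit(disp=False)

adj_or = np.exp(fit.params[1])           # adjusted OR for the group term
lo, hi = np.exp(fit.conf_int()[1])       # 95% CI on the OR scale
print(f"Adjusted OR = {adj_or:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```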
The University of Georgia (USA) is partnering with the University of Padova (Italy) on a dual Master's degree program in sustainable agriculture, promoting collaboration on some of the biggest challenges facing agriculture today. This innovative program, launched in 2016, provides students with outstanding training and a unique opportunity to learn about the challenges, opportunities, and leading edges of precision agriculture on another continent – an experience that will serve graduates well when they enter the job market in an increasingly global economy. This paper presents the goals of the program and its curriculum, and describes the opportunities available to prospective students. In addition, it describes the process of developing the dual degree, which can serve as a guide for others wishing to develop similar programs.
Commercial introduction of soybean and cotton cultivars genetically modified for resistance to the synthetic auxin herbicides dicamba and 2,4-D will allow these compounds to be used with greater flexibility but may expose susceptible soybean and cotton cultivars to nontarget herbicide drift. It is well known from past experience that soybean and cotton are both highly sensitive to low-dose exposures of dicamba and 2,4-D. In this study, a meta-analysis approach was used to synthesize data from over seven decades of simulated drift experiments in which investigators treated soybean and cotton with low doses of dicamba and 2,4-D and measured the resulting yields. These data were used to produce global dose–response curves for each crop and herbicide, with crop yield plotted against herbicide dose. The meta-analysis showed that soybean is more susceptible to dicamba in the flowering stage and relatively tolerant to 2,4-D at all growth stages. Conversely, cotton is tolerant to dicamba but extremely sensitive to 2,4-D, especially in the vegetative and preflowering squaring stages. Both crops are highly variable in their responses to synthetic auxin herbicide exposure, with soil moisture and air temperature at the time of exposure identified as key factors. Visual injury symptoms, especially during vegetative stages, are not predictive of final yield loss. The global dose–response curves generated by this meta-analysis can inform guidelines for herbicide applications and provide producers and agricultural professionals with a benchmark of the mean and range of crop yield loss that can be expected from drift or other nontarget exposures to 2,4-D or dicamba.
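A dose-response curve of the kind described above is typically fit with a log-logistic model. The sketch below fits a three-parameter log-logistic curve and reports the estimated dose producing a 50% yield reduction; the data points are hypothetical, whereas the source pooled decades of published experiments.

```python
# Sketch: three-parameter log-logistic dose-response fit of the kind
# used for drift studies. Data points are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(dose, upper, slope, ed50):
    """Relative yield (%) as a function of herbicide dose."""
    return upper / (1.0 + (dose / ed50) ** slope)

dose = np.array([0.1, 1, 5, 10, 50, 100, 500])       # hypothetical g ae/ha
yield_pct = np.array([99, 97, 90, 82, 55, 38, 10])   # hypothetical yields

params, _ = curve_fit(log_logistic, dose, yield_pct,
                      p0=[100, 1, 50], maxfev=10_000)
upper, slope, ed50 = params
print(f"Estimated ED50 (dose giving half the maximum yield): {ed50:.1f}")
```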
To determine the length and position of a thyroidectomy scar that is cosmetically most appealing to naïve raters.
Methods:
Images of thyroidectomy scars were reproduced on male and female necks using digital imaging software. Surgical variables studied were scar position and length. Fifteen raters were presented with 56 scar pairings and asked to identify which was preferred cosmetically. Twenty duplicate pairings were included to assess rater reliability. Analysis of variance was used to determine preference.
Results:
Raters preferred low, short scars, followed by high, short scars, with long scars in either position being less desirable (p < 0.05). Twelve of 15 raters had acceptable intra-rater and inter-rater reliability.
Conclusion:
Naïve raters preferred low, short scars over the alternatives. High, short scars were the next most favourably rated. If other factors influencing incision choice are considered equal, surgeons should consider these preferences in scar position and length when planning their thyroidectomy approach.
The study aim was to undertake a qualitative literature review analyzing available databases to define, describe, and categorize public health infrastructure (PHI) priorities for tropical cyclone-, flood-, storm-, tornado-, and tsunami-related disasters.
Methods
Five electronic publication databases were searched for articles that defined, described, or categorized PHI and discussed tropical cyclone-, flood-, storm-, tornado-, and tsunami-related disasters and their impact on PHI. The data were analyzed by aggregating individual articles to create an overall description. The data were then grouped into PHI themes, which were prioritized on the basis of their degree of interdependency.
Results
Sixty-seven relevant articles were identified. PHI was categorized into 13 themes with a total of 158 descriptors. The highest priority PHI identified was workforce. This was followed by water, sanitation, equipment, communication, physical structure, power, governance, prevention, supplies, service, transport, and surveillance.
Conclusions
This review identified workforce as the most important of the 13 thematic areas related to PHI and disasters: if its functionality fails, workforce has the greatest impact on the performance of health services. If workforce is addressed post-disaster, the remaining forms of PHI can then be progressively addressed. These findings are a step toward providing an evidence base to inform PHI priorities in the disaster setting. (Disaster Med Public Health Preparedness. 2016;10:598–610)
A recent outbreak of Q fever was linked to an intensive goat and sheep dairy farm in Victoria, Australia, 2012–2014. Seventeen employees and one family member were confirmed with Q fever over a 28-month period, including two culture-positive cases. The outbreak investigation and management involved a One Health approach, with representation from the human, animal, environmental and public health sectors. Seroprevalence in non-pregnant milking goats was 15% [95% confidence interval (CI) 7–27]; active infection was confirmed by positive quantitative PCR on several animal specimens. Genotyping of Coxiella burnetii DNA obtained from goat and human specimens was identical by two typing methods. A number of farming practices probably contributed to the outbreak, with precipitating factors similar to those of the Netherlands outbreak, 2007–2012. Compared to workers in a high-efficiency particulate arrestance (HEPA)-filtered factory, administrative staff in an unfiltered adjoining office and those regularly handling goats and kids had 5.49 (95% CI 1.29–23.4) and 5.65 (95% CI 1.09–29.3) times the risk of infection, respectively, suggesting that factory workers were protected from windborne spread of organisms. A reduction in the incidence of human cases was achieved through an intensive human vaccination programme plus environmental and biosecurity interventions. Subsequent non-occupational acquisition of Q fever in the spouse of an employee indicates that infection remains endemic in the goat herd and remains a challenge to manage without source control.
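Risk ratios like the 5.49 and 5.65 quoted above are computed from exposure-group counts. The sketch below shows the standard calculation with a 95% Wald confidence interval on the log scale; the counts are hypothetical, since the abstract reports only the resulting ratios.

```python
# Sketch: relative risk with a 95% Wald CI on the log scale. Counts are
# hypothetical; the abstract reports only the resulting ratios.
import math

def relative_risk(a, n1, b, n0):
    """RR of exposed (a/n1) vs. unexposed (b/n0) with a 95% CI."""
    rr = (a / n1) / (b / n0)
    se_log = math.sqrt(1/a - 1/n1 + 1/b - 1/n0)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

rr, lo, hi = relative_risk(a=6, n1=12, b=2, n0=22)   # hypothetical counts
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```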
A history of self-injurious thoughts and behaviors (SITBs) is consistently cited as one of the strongest predictors of future suicidal behavior. However, stark discrepancies in the literature raise questions about the true magnitude of these associations. The objective of this study is to examine the magnitude and clinical utility of the associations between SITBs and subsequent suicide ideation, attempts, and death.
Method
We searched PubMed, PsycInfo, and Google Scholar for papers published through December 2014. Inclusion required that studies include at least one longitudinal analysis predicting suicide ideation, attempts, or death using any SITB variable. We identified 2179 longitudinal studies; 172 met inclusion criteria.
Results
The most common outcome was suicide attempt (47.80%), followed by death (40.50%) and ideation (11.60%). Median follow-up was 52 months (mean = 82.52, s.d. = 102.29). Overall prediction was weak, with weighted mean odds ratios (ORs) of 2.07 [95% confidence interval (CI) 1.76–2.43] for ideation, 2.14 (95% CI 2.00–2.30) for attempts, and 1.54 (95% CI 1.39–1.71) for death. Adjusting for publication bias further reduced estimates. Diagnostic accuracy analyses indicated acceptable specificity (86–87%) and poor sensitivity (10–26%), with areas under the curve marginally above chance (0.60–0.62). Most risk factors generated OR estimates of <2.0 and no risk factor exceeded 4.5. Effects were consistent regardless of sample severity, sample age groups, or follow-up length.
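Weighted mean odds ratios like those above are conventionally pooled by inverse-variance weighting on the log scale. The sketch below back-calculates each study's standard error from its confidence interval and combines the estimates under a fixed-effect model; the study-level inputs are hypothetical.

```python
# Sketch: fixed-effect, inverse-variance pooling of odds ratios on the
# log scale. Study-level (OR, 95% CI) inputs are hypothetical.
import math

studies = [(2.5, 1.4, 4.5), (1.8, 1.1, 2.9), (2.0, 1.2, 3.3)]

num = den = 0.0
for or_, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the CI width
    w = 1.0 / se ** 2                                # inverse-variance weight
    num += w * math.log(or_)
    den += w

pooled = math.exp(num / den)
se_pooled = math.sqrt(1.0 / den)
lo = math.exp(num / den - 1.96 * se_pooled)
hi = math.exp(num / den + 1.96 * se_pooled)
print(f"Pooled OR = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```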
Conclusions
Prior SITBs confer risk for later suicidal thoughts and behaviors. However, they only provide a marginal improvement in diagnostic accuracy above chance. Addressing gaps in study design, assessment, and underlying mechanisms may prove useful in improving prediction and prevention of suicidal thoughts and behaviors.