This 17-year prospective study applied a social-developmental lens to the challenge of distinguishing predictors of adolescent-era substance use from predictors of longer-term adult substance use problems. A diverse community sample of 168 individuals was repeatedly assessed from age 13 to age 30 using test, self-, parent-, and peer-report methods. As hypothesized, substance use within adolescence was linked to a range of likely transient social and developmental factors that are particularly salient during the adolescent era, including popularity with peers, peer substance use, parent–adolescent conflict, and broader patterns of deviant behavior. Substance abuse problems at ages 27–30 were best predicted, even after accounting for levels of substance use in adolescence, by adolescent-era markers of underlying deficits, including lack of social skills and poor self-concept. The factors that best predicted levels of adolescent-era substance use were not generally predictive of adult substance abuse problems in multivariate models (either with or without accounting for baseline levels of use). Results are interpreted as suggesting that recognizing the developmental nature of adolescent-era substance use may be crucial to distinguishing factors that predict socially driven and/or relatively transient use during adolescence from factors that predict long-term problems with substance abuse that extend well into adulthood.
Although the Peritraumatic Distress Inventory (PDI) and Peritraumatic Dissociative Experiences Questionnaire (PDEQ) are both useful for identifying adults at risk of developing acute and chronic post-traumatic stress disorder (PTSD), they have not been validated in school-aged children. The present study aims to assess the psychometric properties of the PDI and PDEQ in a sample of French-speaking school children.
One hundred and thirty-three school-aged victims of road traffic accidents were consecutively enrolled in this study via the emergency room. Mean(SD) age was 11.7(2.2) years and 56.4% (n=75) were male. The 13-item self-report PDI (range 0-52) and the 10-item self-report PDEQ (range 10-50) were administered within one week of the accident. Symptoms of PTSD were assessed 1 and 6 months later using the 20-item self-report Child Post-Traumatic Stress Reaction Index (CPTS-RI) (range 0-80).
Mean(SD) PDI and PDEQ scores were 19.1(10.1) and 21.1(7.6), respectively, while mean(SD) CPTS-RI scores at 1 and 6 months were 22.6(12.4) and 20.6(13.5), respectively. Cronbach's alpha coefficients were 0.80 and 0.77 for the PDI and PDEQ, respectively. The 1-month test-retest correlation coefficient (n=33) was 0.77 for both measures. The PDI demonstrated a 2-factor structure while the PDEQ displayed a 1-factor structure. As in adults, the two measures were inter-correlated (r=0.52) and correlated with subsequent PTSD symptoms (r=0.21-0.56; all p<0.05).
The PDI and PDEQ are reliable and valid in school-aged children, and predict PTSD symptoms.
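The internal-consistency coefficients reported above follow the standard Cronbach's alpha formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch of that computation, using invented toy data (not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score columns.

    `items` holds k lists, each with one item's scores across
    the same n respondents. Toy data only, not study data.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Invented 3-item, 4-respondent example:
scores = [
    [2, 3, 4, 4],
    [1, 3, 4, 5],
    [2, 2, 4, 5],
]
alpha = cronbach_alpha(scores)
```

With the study's 13 PDI items the same function would apply unchanged; only the toy matrix above is hypothetical.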
It remains unknown whether peritraumatic reactions predict PTSD symptoms in younger populations. We therefore prospectively investigated the power of self-reported peritraumatic distress and dissociation to predict the development of PTSD symptoms at 1 month in school-aged children.
A sample of 103 school-aged children (8-15 years old) admitted to an Emergency Department after a road traffic accident were consecutively enrolled. Peritraumatic distress was assessed using the Peritraumatic Distress Inventory (range 0-52) and peritraumatic dissociation was assessed using the Peritraumatic Dissociative Experiences Questionnaire (PDEQ) (range 10-50). PTSD symptoms were measured at 1-month by both the child version of the clinician-administered PTSD Scale (CAPS-CA) (range: 0-136) and the Child Post-traumatic Stress Reaction Index (CPTS-RI) (range 0-80).
Mean(SD) participant age was 11.7(2.2) years and 53.4% (n=55) were male. At baseline, mean PDI and PDEQ scores were 21.4 (SD=7.8) and 19.2 (SD=10.2), respectively. At 1 month, mean self-reported (CPTS-RI) and interviewer-based (CAPS-CA) PTSD symptom scores were 23.2 (SD=12.1) and 19.0 (SD=16.9), respectively. According to the CAPS-CA, 5 children (4.9%) suffered from full PTSD. Bivariate analyses demonstrated a significant association between peritraumatic variables (PDI and PDEQ) and both CAPS-CA and CPTS-RI scores (r=0.22-0.57; all p<0.05). However, in a multivariate analysis, the PDI was the only significant predictor of acute PTSD symptoms (Beta=0.33, p<0.05).
As has been found in adults, peritraumatic distress is a robust predictor of who will develop PTSD symptoms among school-aged children.
Basic self disturbances (BSD), including changes in the 'pre-reflexive' sense of self and the loss of first-person perspective, are characteristic of schizophrenia spectrum disorders and highly prevalent in subjects at 'ultra high risk' for psychosis (UHR). The current literature indicates that cortical midline structures (CMS) may be implicated in the neurobiological substrates of the 'basic self' in healthy controls.
Neuroanatomical investigation of BSD in a UHR sample
To test the hypotheses that: (i) UHR subjects have higher 'Examination of Anomalous Self Experience' (EASE) scores compared with controls; (ii) UHR subjects have neuroanatomical alterations in the CMS compared with controls; and (iii) within UHR subjects, EASE scores are directly related to structural CMS alterations.
32 UHR subjects (27 antipsychotic-naïve) and 17 healthy controls (HC) were assessed with the 57-item semi-structured EASE interview. Voxel-based morphometry (VBM) was conducted in the same subjects, with a priori regions of interest (ROIs) defined in the CMS (anterior/posterior cingulate and medial prefrontal cortex).
Despite high variability in the UHR group, the overall EASE score was higher (p<0.01, Cohen's d=2.91) in the UHR group (mean=30.15, SD=16.46) than in the HC group (mean=1.79, SD=2.83). UHR subjects showed gray matter reduction in the CMS compared with HC (p<0.05, FWE-corrected). Across the whole sample, lower gray matter volume in the anterior cingulate was correlated with higher EASE scores (p<0.05).
This study provides preliminary evidence that gray matter reductions in the CMS are correlated with BSD in UHR people.
Yukon Territory (YT) is a remote region in northern Canada with ongoing spread of tuberculosis (TB). To explore the utility of whole genome sequencing (WGS) for TB surveillance and monitoring in a setting with detailed contact tracing and interview data, we used a mixed-methods approach. Our analysis included all culture-confirmed cases in YT (2005–2014) and incorporated data from 24-locus Mycobacterial Interspersed Repetitive Units-Variable Number of Tandem Repeats (MIRU-VNTR) genotyping, WGS and contact tracing. We compared field-based (contact investigation (CI) data + MIRU-VNTR) and genomic-based (WGS + MIRU-VNTR + basic case data) investigations to identify the most likely source of each person's TB and assessed the knowledge, attitudes and practices of programme personnel around genotyping and genomics using online, multiple-choice surveys (n = 4) and an in-person group interview (n = 5). Field- and genomics-based approaches agreed for 26 of 32 (81%) cases on likely location of TB acquisition. There was less agreement in the identification of specific source cases (13/22 or 59% of cases). Single-locus MIRU-VNTR variants and limited genetic diversity complicated the analysis. Qualitative data indicated that participants viewed genomic epidemiology as a useful tool to streamline investigations, particularly in differentiating latent TB reactivation from recent transmission. Based on this, genomic data could be used to enhance CIs, focus resources, target interventions and aid in TB programme evaluation.
Adolescent association with deviant and delinquent friends was examined for its roots in coercive parent–teen interactions and its links to functional difficulties extending beyond delinquent behavior and into adulthood. A community sample of 184 adolescents was followed from age 13 to age 27, with collateral data obtained from close friends, classmates, and parents. Even after accounting for adolescent levels of delinquent and deviant behavior, association with deviant friends was predicted by coercive parent–teen interactions and then linked to declining functioning with peers during adolescence and greater internalizing and externalizing symptoms and poorer overall adjustment in adulthood. Results are interpreted as suggesting that association with deviant friends may disrupt a core developmental task—establishing positive relationships with peers—with implications that extend well beyond deviancy-training effects.
To evaluate the association between novel pre- and post-operative biomarker levels and 30-day unplanned readmission or mortality after paediatric congenital heart surgery.
Children aged 18 years or younger undergoing congenital heart surgery (n = 162) at Johns Hopkins Hospital from 2010 to 2014 were enrolled in the prospective cohort. Collected novel pre- and post-operative biomarkers include soluble suppression of tumorigenicity 2, galectin-3, N-terminal prohormone of brain natriuretic peptide, and glial fibrillary acidic protein. A model based on clinical variables from the Society of Thoracic Surgery database was developed and evaluated against two augmented models.
Unplanned readmission or mortality within 30 days of cardiac surgery occurred among 21 (13%) children. The clinical model augmented with pre-operative biomarkers demonstrated a statistically significant improvement over the clinical model alone, with an area under the receiver-operating characteristic curve of 0.754 (95% confidence interval: 0.65–0.86) compared to 0.617 (95% confidence interval: 0.47–0.76; p-value: 0.012). The clinical model augmented with pre- and post-operative biomarkers also demonstrated a significant improvement over the clinical model alone, with an area under the receiver-operating characteristic curve of 0.802 (95% confidence interval: 0.72–0.89; p-value: 0.003).
Novel biomarkers add significant predictive value when assessing the likelihood of unplanned readmission or mortality after paediatric congenital heart surgery. Further exploration of the utility of these novel biomarkers during the pre- or post-operative period to identify early risk of mortality or readmission will aid in determining the clinical utility and application of these biomarkers into routine risk assessment.
Few studies have used genomic epidemiology to understand tuberculosis (TB) transmission in rural and remote settings – regions often unique in history, geography and demographics. To improve our understanding of TB transmission dynamics in Yukon Territory (YT), a circumpolar Canadian territory, we conducted a retrospective analysis in which we combined epidemiological data collected through routine contact investigations with clinical and laboratory results. Mycobacterium tuberculosis isolates from all culture-confirmed TB cases in YT (2005–2014) were genotyped using 24-locus Mycobacterial Interspersed Repetitive Units-Variable Number of Tandem Repeats (MIRU-VNTR) and compared to each other and to those from the neighbouring province of British Columbia (BC). Whole genome sequencing (WGS) of genotypically clustered isolates revealed three sustained transmission networks within YT, two of which also involved BC isolates. While each network had distinct characteristics, all had at least one individual acting as the probable source of three or more culture-positive cases. Overall, WGS revealed that TB transmission dynamics in YT are distinct from patterns of spread in other, more remote Northern Canadian regions, and that the combination of WGS and epidemiological data can provide actionable information to local public health teams.
The Meat Standards Australia (MSA) grading scheme has the ability to predict beef eating quality for each ‘cut×cooking method combination’ from animal and carcass traits such as sex, age, breed, marbling, hot carcass weight and fatness, ageing time, etc. Following MSA testing protocols, a total of 22 different muscles, cooked by four different cooking methods and to three different degrees of doneness, were tasted by over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia. Consumers scored the sensory characteristics (tenderness, flavour liking, juiciness and overall liking) and then allocated samples to one of four quality grades: unsatisfactory, good-every-day, better-than-every-day and premium. We observed that 26% of the beef was unsatisfactory. As previously reported, 68% of samples were allocated to the correct quality grades using the MSA grading scheme. Furthermore, only 7% of the beef that consumers found unsatisfactory was misclassified as acceptable. Overall, we concluded that an MSA-like grading scheme could be used to predict beef eating quality and hence underpin commercial brands or labels in a number of European countries, and possibly the whole of Europe. In addition, such an eating quality guarantee system may allow the implementation of an MSA genetic index to improve eating quality through genetics as well as through management. Finally, such an eating quality guarantee system is likely to generate economic benefits to be shared along the beef supply chain from farmers to retailers, as consumers are willing to pay more for a better quality product.
Struggles managing conflict and hostility in adolescent social relationships were examined as long-term predictors of immune-mediated inflammation in adulthood that has been linked to long-term health outcomes. Circulating levels of interleukin-6 (IL-6), a marker of immune system dysfunction when chronically elevated, were assessed at age 28 in a community sample of 127 individuals followed via multiple methods and reporters from ages 13 to 28. Adult serum IL-6 levels were predicted across periods as long as 15 years by adolescents’ inability to defuse peer aggression and poor peer-rated conflict resolution skills, and by independently observed romantic partner hostility in late adolescence. Adult relationship difficulties also predicted higher IL-6 levels but did not mediate predictions from adolescent-era conflict struggles. Predictions were also not mediated by adult trait hostility or aggressive behavior, suggesting the unique role of struggles with conflict and hostility from others during adolescence. The implications for understanding the import of adolescent peer relationships for life span physical health outcomes are considered.
Paediatric hospital-associated venous thromboembolism is a leading quality and safety concern at children’s hospitals.
The aim of this study was to determine risk factors for hospital-associated venous thromboembolism in critically ill children following cardiothoracic surgery or therapeutic cardiac catheterisation.
We conducted a retrospective, case–control study of children admitted to the cardiovascular intensive care unit at Johns Hopkins All Children’s Hospital (St. Petersburg, Florida, United States of America) from 2006 to 2013. Hospital-associated venous thromboembolism cases were identified based on ICD-9 discharge codes and validated using radiological record review. We randomly selected two contemporaneous cardiovascular intensive care unit controls without hospital-associated venous thromboembolism for each hospital-associated venous thromboembolism case, and limited the study population to patients who had undergone cardiothoracic surgery or therapeutic cardiac catheterisation. Odds ratios and 95% confidence intervals for associations between putative risk factors and hospital-associated venous thromboembolism were determined using univariate and multivariate logistic regression.
Among 2718 admissions to the cardiovascular intensive care unit during the study period, 65 met the criteria for hospital-associated venous thromboembolism (occurrence rate, 2%). Restriction to cases and controls having undergone the procedures of interest yielded a final study population of 57 hospital-associated venous thromboembolism cases and 76 controls. In a multiple logistic regression model, major infection (odds ratio=5.77, 95% confidence interval=1.06–31.4), age ⩽1 year (odds ratio=6.75, 95% confidence interval=1.13–160), and central venous catheterisation (odds ratio=7.36, 95% confidence interval=1.13–47.8) were found to be statistically significant independent risk factors for hospital-associated venous thromboembolism in these children. Patients with all three factors had a markedly increased post-test probability of having hospital-associated venous thromboembolism.
Major infection, infancy, and central venous catheterisation are independent risk factors for hospital-associated venous thromboembolism in critically ill children following cardiothoracic surgery or cardiac catheter-based intervention, which, in combination, define a high-risk group for hospital-associated venous thromboembolism.
Objectives: The aim of this study was to identify whether the three main primary progressive aphasia (PPA) variants would show differential profiles on measures of visuospatial cognition. We hypothesized that the logopenic variant would have the most difficulty across tasks requiring visuospatial and visual memory abilities. Methods: PPA patients (n=156), diagnosed using current criteria, and controls were tested on a battery of tests tapping different aspects of visuospatial cognition. We compared the groups on an overall visuospatial factor; construction, immediate recall, delayed recall, and executive functioning composites; and on individual tests. Cross-sectional and longitudinal comparisons were made, adjusted for disease severity, age, and education. Results: The logopenic variant had significantly lower scores on the visuospatial factor and the most impaired scores on all composites. The nonfluent variant had significant difficulty on all visuospatial composites except the delayed recall, which differentiated them from the logopenic variant. In contrast, the semantic variants performed poorly only on delayed recall of visual information. The logopenic and nonfluent variants showed decline in figure copying performance over time, whereas in the semantic variant, this skill was remarkably preserved. Conclusions: This extensive examination of performance on visuospatial tasks in the PPA variants solidifies some previous findings, for example, that delayed recall of visual stimuli adds value in differential diagnosis between logopenic variant PPA and nonfluent variant PPA, and illuminates the possibility of common mechanisms that underlie both linguistic and non-linguistic deficits in the variants. Furthermore, this is the first study that has investigated visuospatial functioning over time in the PPA variants. (JINS, 2018, 24, 259–268)
Accurately quantifying a consumer’s willingness to pay (WTP) for beef of different eating qualities is intrinsically linked to the development of eating-quality-based meat grading systems, and therefore the delivery of consistent, quality beef to the consumer. Following Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia were asked to detail their willingness to pay for beef from one of four categories that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium quality. These figures were subsequently converted to a proportion relative to the good-every-day category (P-WTP) to allow comparison between different currencies and time periods. Consumers also answered a short demographic questionnaire. Consumer P-WTP was found to be remarkably consistent between different demographic groups. After quality grade, by far the greatest influence on P-WTP was country of origin. This difference could not be explained by the other demographic factors examined in this study, such as occupation, gender, frequency of consumption and the importance of beef in the diet. Therefore, we can conclude that the P-WTP for beef is highly transferable between different consumer groups, but not countries.
A legionellosis outbreak at an industrial site was investigated to identify and control the source. Cases were identified from disease notifications, workplace illness records, and from clinicians. Cases were interviewed for symptoms and risk factors and tested for legionellosis. Implicated environmental sources were sampled and tested for legionella. We identified six cases with Legionnaires’ disease and seven with Pontiac fever; all had been exposed to aerosols from the cooling towers on the site. Nine cases had evidence of infection with either Legionella pneumophila serogroup (sg) 1 or Legionella longbeachae sg1; these organisms were also isolated from the cooling towers. There was 100% DNA sequence homology between cooling tower and clinical isolates of L. pneumophila sg1 using sequence-based typing analysis; no clinical L. longbeachae isolates were available to compare with environmental isolates. Routine monitoring of the towers prior to the outbreak failed to detect any legionella. Data from this outbreak indicate that L. pneumophila sg1 transmission occurred from the cooling towers; in addition, L. longbeachae transmission was suggested but remains unproven. L. longbeachae detection in cooling towers has not been previously reported in association with legionellosis outbreaks. Waterborne transmission should not be discounted in investigations for the source of L. longbeachae infection.
The beef industry must become more responsive to the changing market place and consumer demands. An essential part of this is quantifying a consumer’s perception of the eating quality of beef and their willingness to pay for that quality, across a broad range of demographics. Over 19 000 consumers from Northern Ireland, Poland, Ireland and France each tasted seven beef samples and scored them for tenderness, juiciness, flavour liking and overall liking. These scores were weighted and combined to create a fifth score, termed the Meat Quality 4 score (MQ4) (0.3×tenderness, 0.1×juiciness, 0.3×flavour liking and 0.3×overall liking). They also allocated the beef samples into one of four quality grades that best described the sample: unsatisfactory, good-every-day, better-than-every-day or premium. After the completion of the tasting panel, consumers were then asked to detail, in their own currency, their willingness to pay for these four categories, which was subsequently converted to a proportion relative to the good-every-day category (P-WTP). Consumers also answered a short demographic questionnaire. The four sensory scores, the MQ4 score and the P-WTP were analysed separately, as dependent variables in linear mixed effects models. The answers from the demographic questionnaire were included in the model as fixed effects. Overall, there were only small differences in consumer scores and P-WTP between demographic groups. Consumers who preferred their beef cooked medium or well-done scored beef higher, except in Poland, where the opposite trend was found. This may be because Polish consumers were more likely to prefer their beef cooked well-done, but samples were cooked medium for this group. There was a small positive relationship with the importance of beef in the diet, increasing sensory scores by about 4% in Poland and Northern Ireland. Men also scored beef about 2% higher than women for most sensory scores in most countries.
In most countries, consumers were willing to pay between 150 and 200% more for premium beef, and there was a 50% penalty in value for unsatisfactory beef. After quality grade, by far the greatest influence on P-WTP was country of origin. Consumer age also had a small negative relationship with P-WTP. The results indicate that a single quality score could reliably describe the eating quality experienced by all consumers. In addition, if reliable quality information is delivered to consumers they will pay more for better quality beef, which would add value to the beef industry and encourage improvements in quality.
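The MQ4 score described above is a fixed linear combination of the four consumer sensory scores, using the weights given in the text (0.3 tenderness, 0.1 juiciness, 0.3 flavour liking, 0.3 overall liking). A minimal sketch; the sample scores fed in are invented for illustration:

```python
def mq4(tenderness, juiciness, flavour_liking, overall_liking):
    """Meat Quality 4 (MQ4) score: weighted sum of the four
    consumer sensory scores, with the weights stated in the text
    (0.3, 0.1, 0.3, 0.3)."""
    return (0.3 * tenderness
            + 0.1 * juiciness
            + 0.3 * flavour_liking
            + 0.3 * overall_liking)

# Invented example scores on a 0-100 scale:
score = mq4(tenderness=70, juiciness=60, flavour_liking=75, overall_liking=72)
```

Because the weights sum to 1, the MQ4 stays on the same 0–100 scale as the individual sensory scores.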
Quantifying consumer responses to beef across a broad range of demographics, nationalities and cooking methods is vitally important for any system evaluating beef eating quality. On the basis of previous work, it was expected that consumer scores would be highly accurate in determining quality grades for beef, thereby providing evidence that such a technique could be used to form the basis of an eating quality grading system for beef. Following the Australian MSA (Meat Standards Australia) testing protocols, over 19 000 consumers from Northern Ireland, Poland, Ireland, France and Australia tasted cooked beef samples, then allocated them to a quality grade: unsatisfactory, good-every-day, better-than-every-day and premium. The consumers also scored beef samples for tenderness, juiciness, flavour-liking and overall-liking. The beef was sourced from all countries involved in the study and cooked by four different cooking methods and to three different degrees of doneness, with each experimental group in the study consisting of a single cooking doneness within a cooking method for each country. For each experimental group, and for the data set as a whole, a linear discriminant function was calculated, using the four sensory scores, which was used to predict the quality grade. This process was repeated using two conglomerate scores derived from weighting and combining the consumer sensory scores for tenderness, juiciness, flavour-liking and overall-liking: the original meat quality 4 score (oMQ4) (0.4, 0.1, 0.2, 0.3) and the current meat quality 4 score (cMQ4) (0.3, 0.1, 0.3, 0.3). From the results of these analyses, the optimal weightings of the sensory scores to generate an ‘ideal meat quality 4 score (MQ4)’ for each country were calculated, and the MQ4 values that reflected the boundaries between the four quality grades were determined.
The oMQ4 weightings were far more accurate in categorising European meat samples than the cMQ4 weightings, highlighting that tenderness is more important than flavour to the consumer when determining quality. The accuracy of the discriminant analysis in predicting the consumer-scored quality grades was similar across all consumer groups (68%) and similar to previously reported values. These results demonstrate that this technique, as used in the MSA system, could be used to predict consumer assessment of beef eating quality and therefore to underpin a commercial eating quality guarantee for all European consumers.
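The boundary-finding step described above yields three cut-off MQ4 values separating the four quality grades for each country. How such boundaries would then be applied to grade a sample can be sketched as follows; the threshold numbers here are invented placeholders, not the study's fitted, country-specific boundaries:

```python
def grade_from_mq4(mq4_score, boundaries=(39.0, 62.0, 80.0)):
    """Map an MQ4 score to one of the four consumer quality grades.

    `boundaries` holds the three cut-off MQ4 values between adjacent
    grades. The default values are hypothetical placeholders for
    illustration only.
    """
    labels = ["unsatisfactory", "good-every-day",
              "better-than-every-day", "premium"]
    for cut, label in zip(boundaries, labels):
        if mq4_score < cut:
            return label
    return labels[-1]  # at or above the top boundary

grade = grade_from_mq4(71.1)
```

In a production grading system the boundary triple would be replaced by the discriminant-analysis estimates for the relevant country.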
Effects of plant maturity on apparent ruminal synthesis and post-ruminal supply of B vitamins were evaluated in two feeding trials. Diets containing alfalfa (Trial 1) or orchardgrass (Trial 2) silages harvested either (1) early cut, less mature (EC) or (2) late cut, more mature (LC) as the sole forage were offered to ruminally and duodenally cannulated lactating Holstein cows in crossover design experiments. In Trial 1, conducted with 16 cows (569±43 kg of empty BW (ruminal content removed) and 43.7±8.6 kg/day of 3.5% fat-corrected milk yield; mean±SD) in two 17-day treatment periods, both diets provided ~22% forage NDF and 27% total NDF, and the forage-to-concentrate ratios were 53 : 47 and 42 : 58 for EC and LC, respectively. In Trial 2, conducted with 13 cows (588±55 kg of empty BW and 43.7±7.7 kg/day of 3.5% fat-corrected milk yield; mean±SD) in two 18-day treatment periods, both diets provided ~25% forage NDF and 31% total NDF; the forage-to-concentrate ratios were 58 : 42 and 46 : 54 for EC and LC, respectively. Thiamin, riboflavin, niacin, vitamin B6, folates and vitamin B12 were measured in feed and duodenal content. Apparent ruminal synthesis was calculated as the duodenal flow minus the intake. Diets based on EC alfalfa decreased the amounts of thiamin, niacin and folates reaching the duodenum, whereas diets based on EC orchardgrass increased riboflavin duodenal flow. Daily apparent ruminal synthesis of thiamin, riboflavin, niacin and vitamin B6 was correlated negatively with their intake, suggesting a microbial regulation of their concentration in the rumen. Vitamin B12 apparent ruminal synthesis was correlated negatively with total volatile fatty acids concentration, but positively with ruminal pH and microbial N duodenal flow.
Recent commentary has suggested that performance management (PM) is fundamentally “broken,” with negative feelings from managers and employees toward the process at an all-time high (Pulakos, Hanson, Arad, & Moye, 2015; Pulakos & O'Leary, 2011). In response, some high-profile organizations have decided to eliminate performance ratings altogether as a solution to the growing disenchantment. Adler et al. (2016) offer arguments both in support of and against eliminating performance ratings in organizations. Although both sides of the debate in the focal article make some strong arguments both for and against utilizing performance ratings in organizations, we believe there continue to be misunderstandings, mischaracterizations, and misinformation with respect to some of the measurement issues in PM. We offer the following commentary not to persuade readers to adopt one particular side over another but as a call to critically reconsider and reevaluate some of the assumptions underlying measurement issues in PM and to dispel some of the pervasive beliefs throughout the performance rating literature.
Background: Epileptic encephalopathy (EE) is a severe condition in which epileptic activity itself may contribute to severe cognitive and behavioural impairments above and beyond what might be expected from the underlying pathology alone. Next-generation sequencing technologies such as whole exome sequencing (WES) can detect underlying genetic causes of EE. Methods: This report describes genotype-phenotype correlation in 29 subjects with unexplained epileptic encephalopathy, in whom WES targeting a list of 557 epilepsy-associated genes was performed. Epilepsy phenotyping was done according to current ILAE recommendations. Results: Median age at seizure onset was 14 months (range 1-48). Electroclinical syndromes were applicable for 16/29; 8/16 had a definite/likely diagnosis. 6/8 subjects with West syndrome had variants in ALG13, STXBP1, PAFAH1B1, SLC35A2, CDKL5 and ADSL. Two patients with Dravet syndrome had variants in SCN1A and PCDH19, respectively. 4/29 had unspecified EE and a definite/likely diagnosis due to STXBP1, POLG, and KCNQ2 (2) variants. 4/29 had a possible diagnosis involving GABRB3, ARHGEF9, PCDH19 and SCN3A variants. Conclusions: The high diagnostic yield (definite/likely diagnosis in 11/29 = 38%), involving a broad variety of epilepsy-associated genes across different electroclinical syndromes, justifies the diagnostic approach of early-onset EE by next-generation sequencing.