A number of genomic conditions caused by copy number variants (CNVs) are associated with a high risk of neurodevelopmental and psychiatric disorders (ND-CNVs). Although these patients also tend to have cognitive impairments, few studies have investigated the range of emotion and behaviour problems in young people with ND-CNVs using measures that are suitable for those with learning difficulties.
A total of 322 young people with 13 ND-CNVs across eight loci (mean age: 9.79 years, range: 6.02–17.91, 66.5% male) took part in the study. Primary carers completed the Developmental Behaviour Checklist (DBC).
Overall, 69% of individuals with an ND-CNV screened positive for clinically significant difficulties. Young people from families with higher incomes (OR = 0.71, CI = 0.55–0.91, p = 0.008) were less likely to screen positive. The rate of difficulties differed depending on ND-CNV genotype (χ² = 39.99, p < 0.001), with the lowest rate in young people with 22q11.2 deletion (45.7%) and the highest in those with 1q21.1 deletion (93.8%). Specific patterns of strengths and weaknesses were found for different ND-CNV genotypes. However, ND-CNV genotype explained no more than 9–16% of the variance, depending on DBC subdomain.
Emotion and behaviour problems are common in young people with ND-CNVs. The ND-CNV specific patterns we find can provide a basis for more tailored support. More research is needed to better understand the variation in emotion and behaviour problems not accounted for by genotype.
This study assesses the association between living in a food desert and cardiovascular health risk among young adults in the USA, and evaluates whether personal and area socioeconomic status moderates this relationship.
A cross-sectional analysis was performed using data from Wave I (1993–1994) and Wave IV (2008) from the National Longitudinal Study of Adolescent to Adult Health. Ordinary least squares regression models assessing the association between living in a food desert and cardiovascular health were performed. Mediation and moderation analyses assessed the degree to which this association was conditioned by area and personal socioeconomic status.
Sample of respondents living in urban census tracts in the USA in 2008.
Young adults (n 8896) aged 24–34 years.
Net of covariates, living in a food desert had a statistically significant association with cardiovascular health risk (range 0–14) (β = 0·048, P < 0·01). This association was partially mediated by area and personal socioeconomic status. Further analyses demonstrate that the adverse association between living in a food desert and cardiovascular health is concentrated among low socioeconomic status respondents.
The findings from this study suggest a complex interplay between food deserts and economic conditions for the cardiovascular health of young adults. Developing interventions that aim to improve health behaviour among lower-income populations may yield benefits for preventing the development of cardiovascular health problems.
The Fontan Outcomes Network was created to improve outcomes for children and adults with single ventricle CHD living with Fontan circulation. The network mission is to optimise longevity and quality of life by improving physical health, neurodevelopmental outcomes, resilience, and emotional health for these individuals and their families. This manuscript describes the systematic design of this new learning health network, including the initial steps in development of a national, lifespan registry, and pilot testing of data collection forms at 10 congenital heart centres.
When setting priorities for health, there is broad agreement that a range of social values and ethical principles beyond clinical and cost-effectiveness matter, but exactly how health technology assessment (HTA) should account for a broader set of criteria remains an area of ongoing debate. In light of this, we welcome a recent review paper by Baltussen et al. evaluating the potential of different multi-criteria decision analysis (MCDA) approaches to enable HTA agencies to incorporate a broader set of values in their appraisals. The authors describe three approaches to MCDA—qualitative MCDA, quantitative MCDA, and MCDA with decision rules—laying out their relative advantages and disadvantages and providing recommendations for how they can best be implemented. While we endorse many of the authors' assessments and conclusions, including the critical role of deliberation in any MCDA approach and the undertaking of qualitative MCDA at a minimum, we take a stronger position regarding the flaws of quantitative MCDA and strongly caution against it. We find quantitative MCDA antithetical to at least two of the ways MCDA is intended to improve HTA recommendations: (i) enhancing quality and (ii) promoting transparency. Quantitative MCDA may mask the complex tradeoffs that exist within and between decision criteria and remain generally inaccessible to those who are not well-versed in its technical methods of appraisal. We advocate for a predominantly qualitative approach to MCDA appraisal centered around deliberation and supplemented with decision aids to help account for health opportunity costs.
Approximately 1.7 million individuals in the United States have been infected with SARS-CoV-2, the virus responsible for the novel coronavirus disease 2019 (COVID-19). This has disproportionately impacted adults, but many children have also been infected and hospitalised. To date, little information has been published addressing the cardiac workup and monitoring of children with COVID-19. Here, we share the approach to cardiac workup and monitoring used at a large congenital heart centre in New York City, the epicentre of the COVID-19 pandemic in the United States.
For a test to be useful, it must be informative; that is, it must (at least some of the time) give different results depending on what is going on. In Chapter 1, we said we would simplify (at least initially) what is going on into just two homogeneous alternatives, D+ and D−. In this chapter, we consider the simplest type of tests, dichotomous tests, which have only two possible results (T+ and T−).
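To make this concrete, a dichotomous test's informativeness can be summarised by how often it returns T+ among D+ individuals versus D− individuals. The sketch below computes sensitivity and specificity, the standard accuracy measures for a dichotomous test, from a 2×2 table of counts; the counts themselves are hypothetical, invented purely for illustration.

```python
# Illustrative sketch: accuracy of a dichotomous test summarised from a
# 2x2 table of counts. All counts here are hypothetical.
def accuracy(tp, fn, fp, tn):
    """Return (sensitivity, specificity): P(T+|D+) and P(T-|D-)."""
    sensitivity = tp / (tp + fn)   # fraction of D+ individuals with T+
    specificity = tn / (tn + fp)   # fraction of D- individuals with T-
    return sensitivity, specificity

sens, spec = accuracy(tp=90, fn=10, fp=20, tn=80)
print(sens, spec)  # 0.9 0.8
```

An uninformative test would give T+ at the same rate in both groups (sensitivity = 1 − specificity); the further these two quantities move apart, the more the result tells us about D+ versus D−.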
A test should give the same or similar results when administered repeatedly to the same individual within a time too short for real biological variation to take place. Results should be consistent whether the test is repeated by the same observer or instrument or by different observers or instruments. This desirable characteristic of a test is called “reliability” or “reproducibility.”
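One common way to quantify this kind of reproducibility for a dichotomous test is chance-corrected agreement, Cohen's kappa. The sketch below assumes two observers each classify the same set of individuals as positive or negative; the counts are hypothetical.

```python
def cohens_kappa(a, b, c, d):
    """Chance-corrected agreement between two observers of a dichotomous test.
    a: both positive, b: obs1+/obs2-, c: obs1-/obs2+, d: both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # expected agreement if the observers rated independently,
    # each at their own marginal rate of calling results positive
    p_pos = ((a + b) / n) * ((a + c) / n)
    p_neg = ((c + d) / n) * ((b + d) / n)
    p_expected = p_pos + p_neg
    return (p_observed - p_expected) / (1 - p_expected)

kappa = cohens_kappa(a=20, b=5, c=10, d=15)  # approximately 0.4
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw percent agreement for judging reliability.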
We have learned how to quantify the accuracy of dichotomous (Chapter 2) and multilevel (Chapter 3) tests. In this chapter, we turn to critical appraisal of studies of diagnostic test accuracy, with an emphasis on problems with study design that affect the interpretation or credibility of the results. After a general discussion of an approach to studies of diagnostic tests, we will review some common biases to which studies of test accuracy are uniquely or especially susceptible and conclude with an introduction to systematic reviews of test accuracy studies.
While screening tests share some features with diagnostic tests, they deserve a chapter of their own because of important differences. Whereas we generally do diagnostic tests on sick people to determine the cause of their symptoms, we generally do screening tests on healthy people with a low prior probability of disease. The problems of false positives and harms of treatment loom larger. In Chapter 4, on evaluating studies of diagnostic test accuracy, we assumed that accurate diagnosis would lead to better outcomes. The benefits and harms of screening tests are so closely tied to the associated treatments that it is hard to evaluate diagnosis and treatment separately. Instead, we compare outcomes such as mortality between those who receive the screening test and those who don’t. We postponed our discussion of screening until after our discussion of randomized trials because randomized trials are a key element in the evaluation of screening tests. Finally, because decisions about screening are often made at the population level, political and other nonmedical factors are more influential. Thus, in this chapter, we focus explicitly on the question of whether doing a screening test improves health, not just on how it alters disease probabilities, and we pay particular attention to biases and nonmedical factors that can lead to excessive screening.1
In previous chapters, we discussed issues affecting evaluation and use of diagnostic tests: how to assess test reliability and accuracy, how to combine the results of tests with prior information to estimate disease probability, and how a test’s value depends on the decision it will guide and the relative cost of errors. In this chapter, we move from diagnosing prevalent disease to predicting incident outcomes. We will discuss the difference between diagnostic tests and risk predictions and then focus on evaluating predictions, specifically covering calibration, discrimination, net benefit calculations, and decision curves.
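As one concrete example of these evaluation tools, the net benefit of acting on a risk prediction at a chosen threshold probability p_t weighs true positives against false positives, with false positives discounted by the threshold odds p_t/(1 − p_t). A minimal sketch, with hypothetical counts:

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit at risk threshold p_t: NB = TP/n - (FP/n) * p_t/(1 - p_t).
    A decision curve plots this quantity across a range of thresholds."""
    return tp / n - (fp / n) * p_t / (1 - p_t)

# Hypothetical model: among 1000 people, treating everyone predicted to be
# above a 10% risk threshold yields 30 true positives and 70 false positives.
nb = net_benefit(tp=30, fp=70, n=1000, p_t=0.1)
```

Plotting net benefit over a range of thresholds, against the "treat all" and "treat none" strategies, gives the decision curve the chapter goes on to discuss.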
As we noted in the Preface and Chapter 1, because the purpose of doing diagnostic tests is often to determine how to treat the patient, we may need to quantify the effects of treatment to decide whether to do a test. For example, if the treatment for a disease provides a dramatic benefit, we should have a lower threshold for testing for that disease than if the treatment is of marginal or unknown efficacy. In Chapters 2, 3, and 6, we showed how the expected benefit of testing depends on the treatment threshold probability (PTT = C/[C + B]) in addition to the prior probability and test characteristics. In this chapter, we discuss how to quantify the benefits and harms of treatments (which determine C and B) using the results of randomized trials. In Chapter 9, we will extend the discussion to observational studies of treatment efficacy; in Chapter 10, we will look at screening tests themselves as treatments and how to quantify their efficacy.
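The threshold formula above, PTT = C/(C + B), translates directly into a decision rule: treat when the estimated disease probability exceeds the threshold. A sketch with hypothetical values for the costs and benefits:

```python
def treatment_threshold(C, B):
    """P_TT = C / (C + B), where C is the net cost of treating a patient
    without the disease and B is the net benefit of treating one with it."""
    return C / (C + B)

# Hypothetical values: a dramatic benefit (B = 9) relative to the harm of
# unnecessary treatment (C = 1) gives a low threshold, so even a modest
# disease probability justifies treatment.
p_tt = treatment_threshold(C=1, B=9)   # 0.1
treat = 0.25 > p_tt                    # prior of 25% exceeds the threshold
```

This is why, as the text notes, a dramatically beneficial treatment lowers the threshold for testing and treating: increasing B shrinks C/(C + B).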
In the previous two chapters, we discussed using the results of randomized trials and observational studies to estimate treatment effects. We were primarily interested in measures of effect size and in problems with design (in randomized trials) and confounding (in observational studies) that could bias effect estimates. We did not focus on whether the apparent treatment effects could be a result of chance or attempt to quantify the precision of our effect estimates. The statistics used to help us with these issues, P-values and confidence intervals, are the subject of this chapter.
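For instance, a confidence interval for the risk difference between the two arms of a trial quantifies the precision of the effect estimate. The sketch below uses the standard Wald (normal-approximation) interval; the event counts are hypothetical.

```python
import math

def risk_difference_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Wald 95% CI for the risk difference (treatment minus control)."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return rd, (rd - z * se, rd + z * se)

# Hypothetical trial: 10/100 events on treatment vs 20/100 on control.
rd, (lo, hi) = risk_difference_ci(events_t=10, n_t=100, events_c=20, n_c=100)
```

An interval that excludes zero corresponds to P < 0.05 for the null hypothesis of no effect, but unlike the P-value it also conveys the plausible range of effect sizes.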