The aim of this study was to evaluate the effectiveness of training programs in improving the knowledge about disaster management among Accredited Social Health Activists (ASHAs) in Mysuru, India.
A quasi-experimental study was conducted among 40 ASHAs of 3 Primary Health Centers in Mysuru district. A 3-h disaster management training and workshop followed by a mock drill was organized in each center. Knowledge about disaster preparedness and management was assessed before and 1 mo after the intervention using an interviewer-administered questionnaire. The data were entered into an MS Excel spreadsheet and analyzed using licensed SPSS 22 software.
The mean score obtained by the ASHAs in the pretraining assessment was 37.2 ± 10.4. One month after the training, knowledge and preparedness had improved, with a mean score of 90.14 ± 5.05. This change was statistically significant on a paired t-test (P < 0.001).
Training programs with mock drills and hands-on activities are effective in improving frontline health workers' knowledge of disaster management. We recommend that such training be organized in all public health facilities.
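The paired t-test reported above compares each participant's pre- and post-training scores. A minimal sketch of that calculation, using purely illustrative numbers (not the study's data):

```python
import math
import statistics

def paired_t(before, after):
    """Paired t-test statistic for before/after scores on the same subjects."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)          # sample SD of the differences
    t = mean_d / (sd_d / math.sqrt(n))      # compare to t with n - 1 df
    return t, n - 1

# Illustrative scores only -- not the ASHA study's data
before = [35, 40, 28, 45, 38, 30]
after = [88, 92, 85, 95, 90, 86]
t, df = paired_t(before, after)
print(f"t = {t:.2f} on {df} df")
```

With real data one would look up the P-value from the t distribution (e.g., `scipy.stats.ttest_rel` does both steps).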
Hurricane Sandy made landfall in New Jersey on October 29, 2012, resulting in widespread power outages and gasoline shortages. These events led to potentially toxic exposures and the need for information related to poisons/toxins in the environment. This report characterizes the New Jersey Poison Information and Education System (NJPIES) call patterns in the days immediately preceding, during, and after Hurricane Sandy to identify areas in need of public health education and prevention.
We examined NJPIES case data from October through December 2012. Most Sandy-related calls had been coded as such by NJPIES staff. Additional Sandy-related cases were identified by performing a case narrative review. Descriptive analyses were performed for timing, case frequencies, exposure substances, gender, caller site, type of information requests, and other data.
The most frequent Sandy-related exposures were gasoline and carbon monoxide (CO). Gasoline exposure cases were predominantly male, whereas CO exposure cases were predominantly female (P < 0.0001). Other leading reasons for Sandy-related calls were poison information, food poisoning/spoilage information, and water contamination.
This analysis identified the need for enhanced public health education and intervention to improve the handling of gasoline and encourage the proper use of gasoline-powered generators and cleaning and cooking equipment, thus reducing toxic exposures.
We implemented universal severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) testing of patients undergoing surgical procedures as a means to conserve personal protective equipment (PPE). The rate of asymptomatic coronavirus disease 2019 (COVID-19) was <0.5%, which suggests that early local public health interventions were successful. Although our protocol was resource intensive, it prevented exposures to healthcare team members.
A well-functioning democracy requires a degree of mutual respect and a willingness to talk across political divides. Yet numerous studies have shown that many electorates are polarized along partisan lines, with animosity towards the partisan out-group. This article further develops the idea of affective polarization, not by partisanship, but instead by identification with opinion-based groups. Examining social identities formed during Britain's 2016 referendum on European Union membership, the study uses surveys and experiments to measure the intensity of partisan and Brexit-related affective polarization. The results show that Brexit identities are prevalent, felt to be personally important and cut across traditional party lines. These identities generate affective polarization as intense as that of partisanship in terms of stereotyping, prejudice and various evaluative biases, convincingly demonstrating that affective polarization can emerge from identities beyond partisanship.
Approximately 1.7 million individuals in the United States have been infected with SARS-CoV-2, the virus responsible for the novel coronavirus disease-2019 (COVID-19). This has disproportionately impacted adults, but many children have been infected and hospitalised as well. To date, there is not much information published addressing the cardiac workup and monitoring of children with COVID-19. Here, we share the approach to the cardiac workup and monitoring utilised at a large congenital heart centre in New York City, the epicentre of the COVID-19 pandemic in the United States.
For a test to be useful, it must be informative; that is, it must (at least some of the time) give different results depending on what is going on. In Chapter 1, we said we would simplify (at least initially) what is going on into just two homogeneous alternatives, D+ and D−. In this chapter, we consider the simplest type of tests, dichotomous tests, which have only two possible results (T+ and T−).
A test should give the same or similar results when administered repeatedly to the same individual within a time too short for real biological variation to take place. Results should be consistent whether the test is repeated by the same observer or instrument or by different observers or instruments. This desirable characteristic of a test is called “reliability” or “reproducibility.”
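One common way to quantify the observer-to-observer reproducibility described here is chance-corrected agreement (Cohen's kappa); a minimal sketch with hypothetical ratings from two observers:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two observers' dichotomous ratings."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement if the two observers rated independently,
    # each at their own marginal rates
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical T+/T- readings by two observers on the same 8 patients
obs1 = ["+", "+", "-", "+", "-", "-", "+", "-"]
obs2 = ["+", "+", "-", "-", "-", "-", "+", "+"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")
```

Kappa of 1 means perfect agreement; 0 means agreement no better than chance, which is why it is preferred over raw percent agreement for rating reliability.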
We have learned how to quantify the accuracy of dichotomous (Chapter 2) and multilevel (Chapter 3) tests. In this chapter, we turn to critical appraisal of studies of diagnostic test accuracy, with an emphasis on problems with study design that affect the interpretation or credibility of the results. After a general discussion of an approach to studies of diagnostic tests, we will review some common biases to which studies of test accuracy are uniquely or especially susceptible and conclude with an introduction to systematic reviews of test accuracy studies.
While screening tests share some features with diagnostic tests, they deserve a chapter of their own because of important differences. Whereas we generally do diagnostic tests on sick people to determine the cause of their symptoms, we generally do screening tests on healthy people with a low prior probability of disease. The problems of false positives and harms of treatment loom larger. In Chapter 4, on evaluating studies of diagnostic test accuracy, we assumed that accurate diagnosis would lead to better outcomes. The benefits and harms of screening tests are so closely tied to the associated treatments that it is hard to evaluate diagnosis and treatment separately. Instead, we compare outcomes such as mortality between those who receive the screening test and those who don’t. We postponed our discussion of screening until after our discussion of randomized trials because randomized trials are a key element in the evaluation of screening tests. Finally, because decisions about screening are often made at the population level, political and other nonmedical factors are more influential. Thus, in this chapter, we focus explicitly on the question of whether doing a screening test improves health, not just on how it alters disease probabilities, and we pay particular attention to biases and nonmedical factors that can lead to excessive screening.
In previous chapters, we discussed issues affecting evaluation and use of diagnostic tests: how to assess test reliability and accuracy, how to combine the results of tests with prior information to estimate disease probability, and how a test’s value depends on the decision it will guide and the relative cost of errors. In this chapter, we move from diagnosing prevalent disease to predicting incident outcomes. We will discuss the difference between diagnostic tests and risk predictions and then focus on evaluating predictions, specifically covering calibration, discrimination, net benefit calculations, and decision curves.
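The net benefit calculation mentioned here has a standard form in decision curve analysis: true positives gained per patient, minus false positives weighted by the odds of the risk threshold. A minimal sketch with illustrative counts (the specific numbers are assumptions, not from the chapter):

```python
def net_benefit(tp, fp, n, p_t):
    """Net benefit at risk threshold p_t:
    TP/n - (FP/n) * p_t/(1 - p_t)."""
    return tp / n - (fp / n) * (p_t / (1 - p_t))

# Illustrative cohort of 1000 with 100 true cases, 10% risk threshold
n = 1000
nb_model = net_benefit(tp=80, fp=150, n=n, p_t=0.10)
# "Treat all" strategy: every case is a TP, every non-case an FP
nb_treat_all = net_benefit(tp=100, fp=900, n=n, p_t=0.10)
print(f"model: {nb_model:.3f}, treat-all: {nb_treat_all:.3f}")
```

A decision curve simply plots net benefit across a range of thresholds `p_t`, comparing the prediction model against the treat-all and treat-none strategies.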
As we noted in the Preface and Chapter 1, because the purpose of doing diagnostic tests is often to determine how to treat the patient, we may need to quantify the effects of treatment to decide whether to do a test. For example, if the treatment for a disease provides a dramatic benefit, we should have a lower threshold for testing for that disease than if the treatment is of marginal or unknown efficacy. In Chapters 2, 3, and 6, we showed how the expected benefit of testing depends on the treatment threshold probability (PTT = C/[C + B]) in addition to the prior probability and test characteristics. In this chapter, we discuss how to quantify the benefits and harms of treatments (which determine C and B) using the results of randomized trials. In Chapter 9, we will extend the discussion to observational studies of treatment efficacy; in Chapter 10, we will look at screening tests themselves as treatments and how to quantify their efficacy.
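The threshold formula quoted here, PTT = C/(C + B), can be checked with a quick worked calculation; the cost and benefit values below are illustrative assumptions:

```python
def treatment_threshold(cost_c, benefit_b):
    """P_TT = C / (C + B): the disease probability at which the expected
    harm of treating the disease-free (C) balances the expected benefit
    of treating the diseased (B)."""
    return cost_c / (cost_c + benefit_b)

# Dramatic-benefit treatment (B = 9) with modest harm (C = 1):
print(treatment_threshold(1, 9))   # 0.1 -- treat (and test) at a low prior
# Marginal treatment (B = 1) with the same harm:
print(treatment_threshold(1, 1))   # 0.5 -- demand much more certainty
```

This matches the chapter's point: the more dramatic the treatment benefit relative to its harm, the lower the probability threshold at which testing and treating become worthwhile.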