The number, intensity, and impact of diverse forms of 'natural' and 'human-made' disasters are increasing. In response, the international community has shifted its primary focus away from disaster response toward prevention and improved preparedness. The current globally agreed roadmap is the ambitious Sendai Framework for Disaster Risk Reduction 2015–2030, central to which is a better understanding of disaster risk management and mitigation. Sendai also urges innovative implementation, especially multi-sectoral and multi-hazard coherence. Yet the law sector itself remains relatively under-developed, with a paucity of supporting 'DRR law' scholarship and minimal cross-sectoral engagement. This is commonly attributed to other sectors' limited understanding of law's dynamic potential as a tool of disaster risk mitigation, despite the availability of many risk-related norms across a broad spectrum of legal regimes. This unique, timely Handbook brings together global and multi-sector perspectives on one of the most pressing policy issues of our time.
The US Centers for Disease Control and Prevention (CDC)-funded Preparedness and Emergency Response Research Centers (PERRCs) conducted research from 2008 to 2015 aimed at improving the complex public health emergency preparedness and response (PHEPR) system. This paper summarizes PERRC studies that addressed the development and assessment of criteria for evaluating PHEPR systems and metrics for measuring their efficiency and effectiveness.
We reviewed 171 PERRC publications indexed in PubMed between 2009 and 2016, derived from 34 PERRC research projects. We identified publications that addressed the development or assessment of criteria and metrics pertaining to PHEPR systems, and we describe the evaluation methods used, the tools developed, the system domains evaluated, and the metrics developed or assessed.
We identified 29 publications from 12 of the 34 PERRC projects that addressed PHEPR system evaluation criteria and metrics. We grouped each study into 1 of 3 system domains, based on the metrics developed or assessed: (1) organizational characteristics (n = 9), (2) emergency response performance (n = 12), and (3) workforce capacity or capability (n = 8). These studies addressed PHEPR system activities including responses to the 2009 H1N1 pandemic and the 2011 tsunami, as well as emergency exercise performance, situational awareness, and workforce willingness to respond. Both PHEPR system process and outcome metrics were developed or assessed by PERRC studies.
PERRC researchers developed and evaluated a range of PHEPR system evaluation criteria and metrics that should be considered by system partners interested in assessing the efficiency and effectiveness of their activities. Nonetheless, the monitoring and measurement problem in PHEPR is far from solved. Lack of standard measures that are readily obtained or computed at local levels remains a challenge for the public health preparedness field. (Disaster Med Public Health Preparedness. 2019;13:626-638)
The Best Practices in Social and Behavioral Research Course was developed to provide instruction on good clinical practice for social and behavioral trials. This study evaluated the new course.
Participants across 4 universities took the course (n=294) and were sent surveys immediately after course completion and 2 months later. Outcomes included the course's relevance, how engaging it was, and whether participants worked differently because of it. Open-ended questions were posed to understand how participants' work was affected.
Participants rated the course as relevant and engaging (6.4 and 5.8 out of 7 points) and reported working differently (4.7 out of 7 points). Participants with less experience in social and behavioral trials were the most likely to report working differently 2 months later.
The course was perceived as relevant and engaging. Participants described actions taken to improve rigor in implementing trials. Future studies with a larger sample and additional participating sites are recommended.
It is not clear how to effectively recruit healthy research volunteers.
We developed an electronic health record (EHR)-based algorithm to identify healthy subjects, who were randomly assigned to receive an invitation to join a research registry via the EHR’s patient portal, letters, or phone calls. A follow-up survey assessed contact preferences.
The EHR algorithm accurately identified 858 healthy subjects. Recruitment rates were low across all methods, but recruitment occurred more quickly via the EHR patient portal than via letters or phone calls (2.7 vs. 19.3 or 10.4 days). Effort and costs per enrolled subject were lower for the EHR patient portal (3.0 vs. 17.3 or 13.6 hours; $113 vs. $559 or $435). Most healthy subjects indicated a preference for contact via electronic methods.
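The per-channel comparison above can be tabulated directly from the figures reported in the abstract. The following minimal Python sketch is illustrative only: the numbers are the abstract's per-enrolled-subject averages, while the dictionary layout and the `cheapest` helper are hypothetical conveniences, not part of the study.

```python
# Per-enrolled-subject figures as reported in the abstract; the data
# structure and helper function are illustrative, not from the study.
channels = {
    "EHR patient portal": {"days": 2.7, "staff_hours": 3.0, "cost_usd": 113},
    "letters": {"days": 19.3, "staff_hours": 17.3, "cost_usd": 559},
    "phone calls": {"days": 10.4, "staff_hours": 13.6, "cost_usd": 435},
}

def cheapest(metric: str) -> str:
    """Return the channel with the lowest value on the given metric."""
    return min(channels, key=lambda name: channels[name][metric])

for metric in ("days", "staff_hours", "cost_usd"):
    print(f"lowest {metric}: {cheapest(metric)}")
```

On every reported metric the portal dominates, which is the basis for the cost-effectiveness claim in the conclusion.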
Healthy subjects can be accurately identified from EHR data, and recruiting healthy research volunteers via an EHR patient portal is faster and more cost-effective than recruiting by letter or phone.