Enhancing Reporting of After Action Reviews of Public Health Emergencies to Strengthen Preparedness: A Literature Review and Methodology Appraisal
This literature review aimed to identify the range of methods used in after action reviews (AARs) of public health emergencies and to develop appraisal tools to compare methodological reporting and validity standards.


A review of biomedical and gray literature identified key approaches from AAR methodological research, real-world AARs, and AAR reporting templates. We developed a 50-item tool to systematically document AAR methodological reporting and a linked 11-item summary tool to document validity. Both tools were used sequentially to appraise the literature included in this study.


This review included 24 highly diverse papers, reflecting the lack of a standardized approach. We observed significant divergence between the standards described in AAR and qualitative research literature, and real-world AAR practice. The lack of reporting of basic methods to ensure validity increases doubt about the methodological basis of an individual AAR and the validity of its conclusions.


The main limitations in current AAR methodology and reporting standards may be addressed through our 11 validity-enhancing recommendations. A minimum reporting standard for AARs could help ensure that findings are valid and clear for others to learn from. A registry of AARs, based on a common reporting structure, may further facilitate shared learning. (Disaster Med Public Health Preparedness. 2019;13:618-625)

Public health emergencies, such as infectious disease outbreaks, floods, and terrorist attacks, impact societies severely but are relatively rare for individual countries. This very rarity creates an impetus to learn systematically from emergencies when they do occur, so as to strengthen public health emergency preparedness and response planning. 1

One such learning approach is to conduct an after action review (AAR) or produce a lessons learned document. These documents are completed after a public health emergency has occurred and draw on quantitative and qualitative methods to identify strengths and weaknesses in the public health emergency preparedness system. By addressing any weaknesses identified, they aim to improve preparedness, response, and recovery capacities and capabilities, ultimately lessening the impact of future incidents. 2, 3

Typically, documentation and other quantitative fact-finding methods help establish a skeleton timeline of events, whereas different forms of qualitative investigation, such as personal testimony, provide richer insights into how and why events unfolded. Combined, these approaches aim to establish the root causes of the event and to identify what lessons can be learned for the future. 2-9

Despite the crucial role of AARs in linking the past with the present and future, there is no widely used or standardized approach to conducting AARs of public health emergencies. In particular, there is no indication of whether the insights gained are valid or based on robust methodologies. 1, 9

This literature review aimed to identify the range of methods used to produce AARs to improve emergency preparedness planning and to develop appraisal tools to compare their methodological reporting and validity standards, with a focus on qualitative methods.


Methods

Literature Search

We searched biomedical databases (Medline, Embase, Scopus) and gray literature sources (Google Advanced, Google Scholar) for AARs that described an enacted response to an emergency (theoretical or “table-top” exercises were excluded), were within the geographic scope of the literature review (the European Union, Australia, Canada, New Zealand, and the United States), and were published in English from January 2000 to August 2015.

Search strategies were structured around 2 major concepts: AARs and emergency preparedness. Searches combined free text and thesaurus terms (where available), including synonyms such as “post-event analysis” and “critical incident review” and techniques used within AARs such as “facilitated look back” and “root-cause analysis” (Supplemental Information [SI] 1). Additional search terms and synonyms were identified by scanning the abstracts of articles identified through a scoping search. Additional AARs were identified by searching the EndNote library of a previous review, undertaken for the European Centre for Disease Prevention and Control (ECDC), that looked for evaluations of emergency response. 10, 11
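For illustration only, a two-concept strategy of this kind ORs together the synonyms within each concept block and then ANDs the blocks. The sketch below is not the authors' actual search string (which is in SI-1); the term lists are examples taken from the text:

```python
# Illustrative sketch only: assemble a boolean search string from the two
# major concepts (AAR terminology and emergency preparedness). The term
# lists are examples drawn from the article, not the full strategy in SI-1.
aar_terms = ['"after action review"', '"post-event analysis"',
             '"critical incident review"', '"facilitated look back"',
             '"root-cause analysis"']
preparedness_terms = ['"emergency preparedness"', '"emergency response"',
                      '"public health emergency"']

def concept_block(terms):
    """OR together the synonyms for one concept."""
    return "(" + " OR ".join(terms) + ")"

# Join the two concept blocks with AND to require both concepts.
query = " AND ".join(concept_block(t) for t in [aar_terms, preparedness_terms])
print(query)
```

The same pattern extends to thesaurus terms (eg, MeSH headings) by adding them to the relevant concept block before joining.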

Reviews were sifted for relevance, first by title and abstract and then by full-text review (Figure 1, PRISMA diagram). Studies excluded at the full-text stage are listed in SI-2.

Figure 1 PRISMA diagram

Development of Appraisal Tools

We developed 2 appraisal tools to systematically document the methods used in AARs, to compare methodological reporting and validity between diverse AARs, and to act as a benchmark of theoretical best practice.

We adapted the approach of Woloshynowych et al, 12 which related to the analysis of critical incidents in health care, to an emergency public health context by triangulating it with 9 contemporary AAR templates. 5, 13-20 The templates were identified through targeted scoping searches in Google, using synonyms for AARs and templates. These templates were multi-sectoral, coming from after action reports, a significant event analysis, and peer assessments in the fields of US national defense, 14 US state government, 13 UK medicolegal practice, 17 Canadian health care insurance, 20 international emergency public health, 5, 16 a UK hospital, 15 and patient safety agencies (see SI-2). 18, 19 Further modifications were made in consultation with an expert advisor to increase the tool's relevance to emergency public health. This resulted in a 50-item appraisal tool (SI-3).

Adapting the approach of Piltch-Loeb et al, 5 we developed an additional 11-point summary tool of factors that boost methodological rigor in case study and qualitative data collection and analysis.

The original Piltch-Loeb 10-point tool remained intact, with minor revisions to definitions to better reflect the context of AARs in emergency public health. We added an 11th factor to capture whether the AAR had ultimately achieved its aim of uncovering the root causes of preparedness, response, and recovery activities, rather than more superficial causes. Definitions of the 11 points are included in SI-4.

Appraising the After Action Reviews

The 50-item appraisal tool (SI-3) and 11-item summary measure (SI-4) were applied sequentially to each AAR. First, the 50-item tool was used to systematically document the methods undertaken by each AAR, before being summarized in the 11-item measure, allowing for a simpler comparison of methodology and validity across diverse reviews.

AARs were reviewed against each item on the summary validity tool and assigned one of 3 codes:

Fully met (++): The criteria have been fully, and often comprehensively, met, leaving little doubt.

Partially met (+): The criteria have been met in some regards, but significant doubt about their comprehensiveness, or clearly missing elements, prevents a higher rating.

Not met (-): The criteria are not met, or have not been reported.
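The 3-level codes map onto the numeric scoring used for the overall validity score in Table 2 ((++) = 2; (+) = 1; (-) = 0). A minimal sketch of that scoring, assuming a list of one rating per summary criterion (the function name and example ratings are illustrative, not the authors' software):

```python
# Map the three rating codes to the numeric values used for the overall
# validity score in Table 2: (++) = 2, (+) = 1, (-) = 0.
RATING_SCORES = {"++": 2, "+": 1, "-": 0}

def overall_validity_score(ratings):
    """Sum the numeric scores for one AAR's 11 summary-criterion ratings."""
    if len(ratings) != 11:
        raise ValueError("expected one rating per summary criterion (11 total)")
    return sum(RATING_SCORES[r] for r in ratings)

# Hypothetical AAR: 4 criteria fully met, 3 partially met, 4 not met.
example = ["++"] * 4 + ["+"] * 3 + ["-"] * 4
print(overall_validity_score(example))  # maximum possible score is 22
```

An AAR fully meeting all 11 criteria would score 22; one reporting nothing would score 0.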

A sample of 3 AARs was independently coded by a second reviewer to test the reliability of the coding instrument and to clarify initial rating definitions. The second rater was blind to the first rater’s scores and rationales. Given the size of the sample, inter-coder agreement was not calculated. Differences between the 2 raters were discussed and changes agreed by consensus. This led to revisions in the wording of some criteria and scoring guidance to improve clarity and therefore scoring consistency. Definitions of the criteria and additional notes used to guide rating decisions are described in SI-3.



Results

Our search identified 24 published AAR documents, relating to 22 distinct AARs (Table 1).

Table 1 Summary of 22 Included AARs

* A risk evaluation method that can be used to analyze and demonstrate causal relationships in high-risk scenarios.

The reviews covered national and international responses to the 2009 A(H1N1) influenza pandemic (n = 8), 21-28 terrorist bombing incidents (n = 5), 29-33 industrial explosions (n = 6), 34-39 hurricanes (n = 2), 40, 41 chemical contamination of drinking water (n = 1), 42 a heat wave (n = 1), 43 and large-scale flooding (n = 1) (see Table 1). 44

Appraisal of After Action Reviews

There was great diversity in the structure, scope, and level of methodological reporting in the 24 reviews identified, potentially reflecting the lack of a standardized approach (Table 2). 21-44 The majority drew heavily on qualitative methods, but the use of established techniques to ensure rigor was routinely missing from the published reports.

Table 2 Summary Validity Measures Reporting for 22 AARs (Including 2 Annexes Appraised Alongside the AAR) 21-44

* Overall validity score based on the following scoring: (++) = 2; (+) = 1; (-) = 0.

Validity-boosting measures most frequently reported in the 24 reviews included spending adequate time to observe the setting, people, and incident documentation; sampling a diverse range of views; using multiple sources of data collection; and utilizing multiple perspectives during the analysis. 21-44 However, these techniques were generally reported in brief, with few reviews fully meeting all 4 basic validity dimensions.

The criteria that were most commonly unmet in these reports were acknowledging a theoretical basis for the review methodology; describing how the reviewers handled discordant evidence; having an external peer-review process; and ensuring respondents to the reviews had an opportunity to validate that their views had been reflected accurately in the final analysis and report (see Table 2).

The majority of AARs showing depth and insight (9 fully met this validity measure) also clearly reported using multiple data sources (7 of 9) and sustained engagement (5 of 9). Other AARs demonstrated depth and insight without reporting clear methods (see Table 2). 29, 34, 35, 44


Based on the systematic assessment of methods and validity measures in 24 AARs, we suggest 11 measures to improve the reporting and validity of reviews more widely (Table 3).

Table 3 Eleven Validity-Enhancing Considerations for Improving Review and Reporting of Public Health Emergency Events

PHEP = public health emergency preparedness.

* The development of an evidence-based minimum reporting standard for after action reviews, similar to the Consolidated Standards of Reporting Trials (CONSORT) statement for randomized controlled trials, may facilitate this process and comparisons between AARs.


Discussion

To our knowledge, this is the first review to systematically document methods used in public health emergency preparedness AARs across a range of hazards and to formulate suggestions to improve future practice based on principles of qualitative research best practice.

The strengths of this review include our inclusive definition of an AAR, our inclusion of after action reports and reporting templates from outside health care, and the development of tools rooted in after action methodological research. These tools were applied to a variety of real-world AARs in the field of emergency preparedness, spanning multiple hazard types.

The most common data collection methods used by the 24 AARs were document review (typically preparedness plans and protocols compared to execution), focus groups, formal public consultations, in-depth interviews, public discussion forums, questionnaires, site visits, and workshops.

Most reviews (17 of 24) did not report a theoretical framework to guide the investigation; those that did all reported a comparative or case study methodology. This represents a small fraction of the diverse range of approaches available to after action investigators, including the critical incident technique 4, 8; critical incident analysis 7, 45; root-cause analysis 46-48; facilitated look-backs 49; the peer assessment approach 6; realist evaluation 5, 9; bow-tie analysis 39; and serious case reviews. 50

Underlying methodologies were frequently unreported, leaving the validity of the reports ambiguous. Although a lack of reporting of basic methods to safeguard validity does not necessarily imply that they were not considered or followed, it does significantly increase doubt about the methodological basis of a review and the validity of its conclusions.


Limitations

Our review searched for reports from a diverse range of after action reviews, but the analyzed sample was small (n = 24), subject to reporting and selection bias, and may not represent the full spectrum of incident reports available. For example, we excluded 16 studies with insufficient methods for analysis (see SI-2: Excluded Studies) and all reviews not published in English.

Three of the 24 included reviews were used to test and develop early versions of both appraisal tools before their final application to the remaining 21 reports, further reducing the number of independent reviews appraised.

Most AAR reports were not clear about how the reviewers' analysis of the data led to generalizable insights, or how discordant information was handled. 22, 28, 29 As such, it was not clear to what extent certain views or data had been explored or discounted, for example, if they did not fit with an emerging researcher consensus. This risked introducing perception bias into the analysis and the conclusions drawn.


Conclusions

We suggest that the lack of methodological reporting provides a strong case for the development of an evidence-based minimum reporting standard for AARs, akin to the CONSORT statement for randomized controlled trials. Such a standard could benefit after action reports in 2 ways: first, it may ensure that a wider range of robust methods is considered before and during the review; second, it may ensure that methods are more clearly reported in the final report itself, allowing an external assessment of validity. The 11-point summary tool presented here allows a simple validity comparison to be made across a range of diverse AARs and could be further developed and refined in the future.

It is noteworthy that critical incident registries have been adopted in transport, health care, and workplace safety industries, but not in emergency preparedness. 5 We thus advocate an AAR registry (similar in nature to the US government’s Lessons Learned Information Sharing program) in Europe, to facilitate cross-border learning that will further strengthen emergency preparedness. 51 The 11-point summary validity tool presented here could contribute to such an initiative by promoting an AAR design that is as robust and credible as possible.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/dmp.2018.82

Acknowledgment and Author Contributions

This publication is based upon a report produced by Bazian Ltd and commissioned by the ECDC under Direct Service Contract ECD.5860. Robert Davies provided input into project design, performed data extraction, performed data synthesis, and coauthored this manuscript; Elly Vaughan managed the project at Bazian, provided input into project design, designed and ran literature searches, performed data extraction, contributed to the synthesis, and coauthored this manuscript; Dr Robert Cook provided input into project design, reviewed draft reports, and provided project oversight; Dr Graham Fraser, Dr Massimo Ciotti, and Dr Jonathan Suk initiated the study and commissioned the work, provided technical guidance throughout the study, and coauthored this manuscript; Dr Katie Geary provided expert advice throughout the project design and execution, including refining the appraisal tools for a public health emergency context.


References

1. ECDC. Meeting report: ad hoc advisory meeting on preparedness Stockholm, 15-16 May 2014. Stockholm: European Centre for Disease Prevention and Control. 2014. Accessed May 25, 2018.
2. Stoto, MA. Measuring and assessing public health emergency preparedness. J Public Health Manag Pract. 2013;19:S16-S21.
3. Geary, K. (International SOS). Personal communication. Rob Davies (Bazian Ltd). 2015.
4. Flanagan, JC. The critical incident technique. Psychol Bull. 1954;51(4):327-358.
5. Piltch-Loeb, RN, Nelson, C, Kraemer, J, et al. Peer assessment of public health emergency response toolkit. Cambridge (MA): Harvard School of Public Health. 2014. Accessed May 25, 2018.
6. Piltch-Loeb, RN, Nelson, CD, Kraemer, JD, et al. A peer assessment approach for learning from public health emergencies. Improv Sys Pract. 2014;129(Suppl 4):28-34.
7. Schwester, RW. Handbook of Critical Incident Analysis. New York (NY): Routledge (Taylor & Francis Group). 2014.
8. Serrat, O. The critical incident technique. Washington, DC: Asian Development Bank. 2010.
9. Stoto, MA. Getting from what to why: using qualitative methods in public health systems research. Orlando (FL): PHSR IG Methods Panel. 2012. Accessed May 26, 2018.
10. ECDC. Best practices in ranking emerging infectious disease threats: a literature review. Stockholm: European Centre for Disease Prevention and Control. 2015. Accessed May 26, 2018.
11. O’Brien, EC, Taft, R, Geary, K, et al. Best practices in ranking communicable disease threats: a literature review, 2015. Euro Surveill. 2016;21(17):pii=30212.
12. Woloshynowych, M, Rogers, S, Taylor-Adams, S, et al. The investigation and analysis of critical incidents and adverse events in healthcare. Health Technol Assess. 2005;9(19):1-143, iii.
13. CGOES. Standardized emergency management system: after action report. California: California Governor’s Office of Emergency Services. 2013. Accessed May 26, 2018.
14. HSEEP. After-action report/improvement plan. Washington (DC): Homeland Security Exercise and Evaluation Program. 2013. Accessed May 26, 2018.
15. Taylor-Adams, S, Vincent, C. Systems analysis of clinical incidents: the London protocol. London: Imperial College London. 2004. Accessed May 27, 2018.
16. ISOS. Significant event analysis reporting form. London: International SOS. 2015.
17. MDU. Medico-legal guide to serious event analysis. London: Medical Defence Union. 2014. Accessed May 27, 2018.
18. NPSA. A quick guide to conducting a significant event audit. London: National Patient Safety Agency. 2008. Accessed May 27, 2018.
19. NPSA. Significant event audit: guidance for primary care teams. London: National Patient Safety Agency. 2008. Accessed May 27, 2018.
20. HIROC. Critical incidents and multi-patient events: risk resource guide. Toronto (ON): Healthcare Insurance Reciprocal of Canada. 2015. Accessed May 27, 2018.
21. Masotti, P, Green, ME, Birtwhistle, R, et al. pH1N1: a comparative analysis of public health responses in Ontario to the influenza outbreak, public health and primary care: lessons learned and policy suggestions. BMC Public Health. 2013;13(1):687.
22. DSB. New influenza A virus (H1N1): a summary of a study on the national response in Norway. Tonsberg. 2011. Accessed May 25, 2018.
23. Socialstyrelsen. A(H1N1) 2009: an evaluation of Sweden’s preparations for and management of the pandemic. Stockholm. 2011. Accessed May 25, 2018.
24. Hine, D. The 2009 influenza pandemic: an independent review of the UK response to the 2009 influenza pandemic. London. 2010. Accessed May 26, 2018.
25. WHO. Global survey on national vaccine deployment and vaccination plans for pandemic A(H1N1) 2009 vaccine – 2010: report of findings. Geneva. 2013. Accessed May 26, 2018.
26. EC. Assessment report on EU-wide pandemic vaccine strategies. Brussels. 2010. Accessed May 26, 2018.
27. HPA. Assessment report on the EU-wide response to pandemic (H1N1) 2009. London. 2010. Accessed May 26, 2018.
28. WHO. Recommendations for good practice in pandemic preparedness: identified through evaluation of the response to pandemic (H1N1) 2009. Copenhagen. 2010. Accessed May 28, 2018.
29. MEMA. After action report for the response to the 2013 Boston marathon bombings. Framingham (MA). 2014. Accessed May 28, 2018.
30. Goralnick, E, Halpern, P, Loo, S, et al. Leadership during the Boston marathon bombings: a qualitative after-action review. Disaster Med Public Health Prep. 2015;9(10):6.
31. Little, M, Cooper, J, Gope, M, et al. “Lessons learned”: a comparative case study analysis of an emergency department response to two burns disasters. Emerg Med Australas. 2012;24(4):420-429.
32. Aylwin, CJ, König, TC, Brennan, NW, et al. Reduction in critical mortality in urban mass casualty incidents: analysis of triage, surge, and resource use after the London bombings on July 7, 2005. Lancet. 2006;368(9554):2219-2225.
33. OEM. After action report: Alfred P. Murrah Federal Building bombing 19 April 1995 in Oklahoma City, Oklahoma. Oklahoma (OK). 1995. Accessed May 28, 2018.
34. HSE. The Buncefield incident 11 December 2005: the final report of the major incident investigation board volume 2. Liverpool. 2008.
35. HSE. The Buncefield incident 11 December 2005: the final report of the major incident investigation board volume 1. Liverpool. 2008.
37. SQW. Buncefield social impact assessment: annex A key information sources. London. 2007.
38. Tapster, C. Buncefield: multi-agency debrief report and recommendations. Hertford. 2007.
39. Paltrinieri, N, Dechy, N, Salzano, E, et al. Lessons learned from Toulouse and Buncefield disasters: from risk analysis failures to the identification of atypical scenarios through a better knowledge management. Risk Anal. 2012;32(8):1404-1419.
40. Knox, CC. Analyzing after-action reports from Hurricanes Andrew and Katrina: repeated, modified, and newly created recommendations. J Emerg Manage. 2013;11(2):160-168.
41. Brevard, SB, Weintraub, SL, Aiken, JB, et al. Analysis of disaster response plans and the aftermath of Hurricane Katrina: lessons learned from a level I trauma center. J Trauma Injury Infect Crit Care. 2008;65(5):1126-1132.
42. Terenzini, C. The report of the Blue Ribbon Committee on the water emergency of April 25-27, 2007 in Spencer, Massachusetts. Spencer (MA). 2007.
43. Adrot, A. Crisis response, organizational improvisation and the dispassionate communicative genre during the 2003 French heat wave. Paris. 2011.
45. SIESWE. Evaluation of an innovative method of assessment: critical incident analysis. Glasgow: Scottish Institute for Excellence in Social Work Education. 2005.
46. Berry, K, Krizek, B. Root cause analysis in response to a “near miss.” J Healthc Qual. 2000;22(2):16-18.
47. Iedema, R, Jorm, C, Braithwaite, J. Managing the scope and impact of root cause analysis recommendations. J Health Organ Manag. 2008;22(6):569-585.
48. Singleton, CM, Debastiani, S, Rose, D, et al. An analysis of root cause identification and continuous quality improvement in public health H1N1 after-action reports. J Public Health Manag Pract. 2014;20(2):197-204.
49. Aledort, JE, Lurie, N, Ricci, KA, et al. Facilitated look-backs: a new quality improvement tool for management of routine annual and pandemic influenza. RAND Corporation. 2006.
50. OFSTED. Learning lessons from serious case reviews 2009-2010: Ofsted’s evaluation of serious case reviews from 1 April 2009 to 31 March 2010. The Office for Standards in Education, Children’s Services and Skills. 2010.
51. Savoia, E, Agboola, F, Biddinger, PD. Use of after action reports (AARs) to promote organizational and systems learning in emergency preparedness. Int J Environ Res Public Health. 2012;9:14.