
Let's Keep Looking for Other Roads: Improving Approaches to Identifying and Addressing Key Drivers

Published online by Cambridge University Press:  29 June 2017

Matt C. Howard*
Affiliation:
The University of South Alabama, Mitchell College of Business
*
Correspondence concerning this article should be addressed to Dr. Matt C. Howard, 5811 USA Drive S., Rm. 346, Mitchell College of Business, University of South Alabama, Mobile, AL 36688. E-mail: MHoward@SouthAlabama.edu


Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2017 

Cucina, Walmsley, Gast, Martin, and Curtin (2017) provide several valid criticisms of survey key driver analysis (SKDA), which is an approach used to identify causes of important employee and organizational outcomes, such as job satisfaction or employee engagement. The authors then propose a new approach that consists of the following four steps:

  • Step 1: Identify targets for intervention

  • Step 2: Develop hypotheses to explain survey results

  • Step 3: Design organizational interventions

  • Step 4: Implement the intervention

When reading their new approach, I could not help but think they were reinventing the wheel, except they were replacing a triangular wheel with a square wheel. That is, the authors were certainly not misguided in providing suggestions beyond SKDA, but I felt that they may have overlooked many best practices of relevant research domains. In the following, I provide some suggestions for methods to improve their approach. My suggestions may not create a circular wheel, but they further the rounding process.

Step 1: Identify Targets for Intervention

To identify intervention targets, Cucina et al. (2017) suggest that practitioners should compare organizations’ responses on individual items and scales to identify those that scored substantially lower than others. Although a valuable approach, this suggestion overlooks prior research on organizational needs assessment (Barbazette, 2006; Cekada, 2010).

McClelland (1993) presents an 11-step approach for conducting a needs assessment, which begins with defining assessment goals, includes reviewing and selecting assessment methods and instruments, and ends with presenting findings and recommendations. Brown (2002) provides a four-step approach that begins with gathering data using multiple methods to identify needs, and it ends with transitioning the assessment outcomes into intervention creation. The classic work of McGehee and Thayer (1961) suggests that needs analyses should contain three facets: organization, operations, and person analyses. The suggestions of Cucina et al. to identify intervention targets include barely any of these steps or facets.

Although practitioners should review these articles and others, I suggest that a particular focus should be placed on needs assessments regarding the questions of why, who, how, when, and what (Barbazette, 2006; Cekada, 2010), which can address the following uncertainties:

  • Why: Why should we perform an intervention? Is there a concern? Is the benefit of the intervention greater than its cost?

  • Who: Who should be included in the intervention? Who is at risk and who is already in need?

  • How: How can the concern be fixed? Is an intervention the best approach? Or is broader organizational change needed?

  • When: When should an intervention be applied? Does the concern change over time?

  • What: What else is needed before identifying key drivers and developing hypotheses?

Of these five questions, the suggestions of Cucina et al. focus on only “why” and “who.” If needs analyses are not performed in a comprehensive and systematic manner, then practitioners may identify key drivers and develop interventions for organizational concerns that are not best addressed through an intervention—or may not actually exist at all.

Step 2: Develop Hypotheses to Explain Survey Results

To generate hypotheses about the causes of low scores on outcomes, Cucina et al. suggest that practitioners review the industrial and organizational (I-O) psychology literature, benchmark the organization's policies and practices against similar organizations, and/or conduct qualitative research. The authors do not suggest the use of quantitative methods, perhaps in an effort to distance their approach from SKDA; however, I believe that this is an oversight. Many quantitative methods can be used to develop hypotheses in conjunction with the suggestions of Cucina et al.

Perhaps the most relevant is causal network analysis (Lunt, 1988, 1991; Lydon, Howard, Wilson, & Geier, 2015; White, 1995). Network analysis is a method to depict the links between certain entities and/or concepts. When applied to causal beliefs, network analysis is able to gauge the perceived causal structure of participants regarding a particular outcome. For instance, Lunt (1988) used network analysis to identify the perceived causal structure of students’ examination failure, discovering that students perceived poor time allotment and poor concentration as more proximal factors to exam failure than limited intelligence or biased teachers. The same analysis could be applied to develop hypotheses about the causes of low scores on outcomes of interest.

To perform a causal network analysis, participants are asked to indicate the strength of all one-directional relationships between certain entities and/or concepts using a provided scale. For instance, a researcher could provide participants with an 8 × 8 matrix with each of eight entities or concepts representing a respective column and row. Participants could be asked to use a one (does not predict at all) to seven (completely predicts) scale to indicate the extent that each entity/concept listed in the row predicts the entity/concept listed in the column. Then, participants’ responses could be analyzed to develop a causal network of the provided entities/concepts, and relationships that meet a certain threshold could be included in the final causal network. To demonstrate the utility of causal network analysis, Figure 1 includes an example perceived causal structure of job satisfaction that is similar to actual results that could be obtained via the method.

Figure 1. Example perceived causal results of job satisfaction.
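As a rough illustration of the analysis itself, the sketch below aggregates participants’ causal-rating matrices and retains only the directed links whose mean rating meets a threshold. It is a minimal example under assumed inputs: the concept names, the threshold of 4.0, and the simulated responses are all hypothetical, and practitioners would substitute their own concepts, rating scale, and retention rule.

```python
import numpy as np

# Hypothetical concepts rated by participants (names are illustrative only).
concepts = ["Pay", "Autonomy", "Supervisor support", "Workload",
            "Coworker relations", "Recognition", "Job security", "Job satisfaction"]

def causal_network(ratings, labels, threshold=4.0):
    """Aggregate participants' k x k causal-rating matrices and retain
    directed links whose mean rating meets the threshold.

    ratings: array of shape (n_participants, k, k); entry [p, i, j] is
    participant p's 1-7 rating of how strongly row concept i predicts
    column concept j.
    """
    mean_ratings = ratings.mean(axis=0)          # average across participants
    np.fill_diagonal(mean_ratings, 0)            # ignore self-links
    edges = [(labels[i], labels[j], round(float(mean_ratings[i, j]), 2))
             for i in range(len(labels))
             for j in range(len(labels))
             if mean_ratings[i, j] >= threshold]
    return sorted(edges, key=lambda e: -e[2])    # strongest perceived links first

# Example with simulated data for 30 participants and 8 concepts.
rng = np.random.default_rng(0)
simulated = rng.integers(1, 8, size=(30, 8, 8)).astype(float)
for cause, effect, strength in causal_network(simulated, concepts):
    print(f"{cause} -> {effect}: {strength}")
```

In practice, the printed edge list (or a plot of it) yields a perceived causal structure like the one depicted in Figure 1, which can then be translated into hypotheses for Step 2.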

Two further notes should be made about causal network analysis. First, the method can be applied with relatively small samples, of a size that would be a concern for SKDA. White (1995) used causal network analysis with a sample of 50 participants, Lunt (1991) used a sample of 44 participants, and Lunt (1988) used a sample of 23 participants. Second, the entities and/or concepts must be provided by the practitioner. For this reason, it is essential for practitioners to perform an initial literature review or even a qualitative investigation to identify a set of entities and/or concepts to include. This may be a benefit of the approach, however. Forcing practitioners to perform an initial literature review or qualitative study ensures the triangulation of methods, which would aid in ensuring the validity of results through multiple approaches.

Also, it may be best to not “throw the baby out with the bathwater” and discard SKDA entirely. Even flawed methods can provide incremental information when used in conjunction with other approaches, and SKDA could still be used to identify certain causes of low outcome scores. For this reason, I emphasize that a greater focus should be placed on the triangulation of approaches when developing hypotheses, such as the simultaneous application of prior theory, prior praxis, qualitative methods, and quantitative methods. If convergence can be obtained with multiple methods, then we can be more certain in the accuracy of our results.

Step 3: Design Organizational Interventions

Cucina et al. (2017) did not provide any suggestions for designing organizational interventions, which is perhaps the most important aspect of their entire approach. I found this unfortunate, as many important developments in intervention creation have been made fairly recently—many occurring in the training literature (Gully & Chen, 2010; Howard & Jacobs, 2016).

In the traditional approach, an intervention is created and subsequently evaluated through a randomized confirmatory trial, which places an almost sole focus on the intervention evaluation process rather than the intervention creation process. It also forces practitioners to analyze the entire omnibus intervention rather than the individual components that compose the intervention. Howard and Jacobs (2016) recently detailed two sophisticated approaches to designing an intervention—the multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART)—which overcome both of these concerns.

MOST includes four primary phases. The first is the Creation phase, in which the initial intervention is created. The second is the Screening phase, in which each component of the intervention is tested through a full or fractional factorial design. Effective components are retained, and ineffective components are removed. If no components are retained, then the process is restarted. The third phase is the Refining phase, in which the optimal delivery of each component is tested. For example, if a component is a group discussion, a 2-hour group discussion may be compared against a 1-hour group discussion. The fourth phase is the Confirming phase, in which the optimal delivery of each retained component is tested together against an alternative group to ensure the overall effectiveness of the intervention. Through this process, each individual component of the intervention is tested and possibly modified before the entire intervention is tested together, resulting in a greater focus on the intervention creation process rather than the intervention evaluation process.
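To make the Screening phase concrete, the sketch below estimates each component’s main effect from a full factorial screening study and flags which components to retain. The component names, cell means, and retention cutoff are illustrative assumptions, not results from any actual study, and a fractional factorial or model-based analysis could replace the simple mean contrasts shown here.

```python
import itertools
import numpy as np

def screen_components(component_names, outcomes_by_cell, retain_cutoff=0.0):
    """Estimate each component's main effect from a 2^k full factorial
    screening study and flag components whose effect exceeds the cutoff.

    outcomes_by_cell: dict mapping an on/off tuple (one entry per component)
    to the mean outcome observed in that experimental cell.
    """
    k = len(component_names)
    cells = list(itertools.product([0, 1], repeat=k))
    decisions = {}
    for idx, name in enumerate(component_names):
        on = np.mean([outcomes_by_cell[c] for c in cells if c[idx] == 1])
        off = np.mean([outcomes_by_cell[c] for c in cells if c[idx] == 0])
        effect = on - off
        decisions[name] = (effect, effect > retain_cutoff)
    return decisions

# Hypothetical screening data: mean satisfaction gain in each of the 2^3 cells.
components = ["group discussion", "feedback report", "goal-setting workshop"]
cell_means = {cell: 0.5 * cell[0] + 0.1 * cell[1] + 0.8 * cell[2]
              for cell in itertools.product([0, 1], repeat=3)}
for name, (effect, keep) in screen_components(components, cell_means,
                                              retain_cutoff=0.2).items():
    print(f"{name}: estimated effect = {effect:.2f}, retain = {keep}")
```

In this hypothetical run, the feedback report shows a negligible effect and would be dropped before the Refining and Confirming phases.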

Alternatively, SMART is used to create a time-varying adaptive intervention. To do so, participants are randomly assigned to an initial intervention component. Then, they are often evaluated on a particular characteristic, such as motivation, but it is not used to determine their subsequent assignment to a component. Instead, participants are randomly assigned to a second intervention component. The process is repeated until all components of interest are included. From the results of this process, the best sequencing of intervention components can be determined. Perhaps more importantly, interactions between certain components and participant characteristics can be identified. For instance, those with low motivation may be resistant to certain components, and these participants may need an intensive component to initiate progress. Once the process is complete, the finalized time-varying adaptive intervention can be tested in a traditional randomized confirmatory trial to ensure its overall effectiveness. Like MOST, SMART also places a greater focus on the intervention creation process through testing individual components before the final omnibus comparison.
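A minimal simulation of the SMART logic is sketched below: each participant is randomized at two stages, and outcomes are then summarized by sequence to suggest which orderings, and which component-by-characteristic interactions, look most promising. The components, the motivation variable, and the outcome model are purely hypothetical assumptions used to illustrate the design.

```python
import random
from collections import defaultdict

def run_smart(participants, stage1_options, stage2_options, outcome_fn, seed=42):
    """Sketch of a SMART: each participant is randomized at stage 1 and
    re-randomized at stage 2; outcomes are then summarized by sequence."""
    rng = random.Random(seed)
    results = defaultdict(list)
    for person in participants:
        first = rng.choice(stage1_options)
        second = rng.choice(stage2_options)
        results[(first, second)].append(outcome_fn(person, first, second))
    return {seq: sum(vals) / len(vals) for seq, vals in results.items()}

# Hypothetical outcome model: low-motivation employees respond better when an
# intensive component comes first (purely illustrative numbers).
def outcome(person, first, second):
    base = 0.3 if person["motivation"] == "low" else 0.6
    boost = 0.3 if (person["motivation"] == "low" and first == "intensive coaching") else 0.1
    return base + boost

employees = ([{"motivation": "low"} for _ in range(50)] +
             [{"motivation": "high"} for _ in range(50)])
summary = run_smart(employees, ["intensive coaching", "self-paced module"],
                    ["peer mentoring", "manager check-in"], outcome)
for sequence, mean_outcome in sorted(summary.items(), key=lambda kv: -kv[1]):
    print(sequence, round(mean_outcome, 2))
```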

Both of these methods have already been applied extensively to create public health interventions (Almirall, Compton, Gunlicks-Stoessel, Duan, & Murphy, 2012; Collins, Murphy, & Strecher, 2007), and Howard and Jacobs (2016) provide several suggestions regarding relevant areas of industry and government that are prime for the application of MOST and SMART. Although MOST and SMART are effective methods to develop an intervention, they only represent two of many approaches. I hope that future practice applies these more sophisticated methods to develop effective interventions.

Step 4: Implement the Intervention

Cucina et al. (2017) suggest using a two-group, posttest design to test the effectiveness of a created intervention, which can be considered a randomized confirmatory trial. The authors’ evaluation method was derived from the classic suggestions of Cook, Campbell, and Day (1979), but it ignores developments in experimental and quasi-experimental designs of the past 40 years—many again occurring in the training literature (Cascio & Aguinis, 2005; Shadish, Cook, & Campbell, 2002).

Sackett and Mullen (1993) drew attention to the two most common questions in evaluating an intervention: (a) “How much change has occurred?” and (b) “Has a target outcome level been reached?” If the former is of interest, then practitioners can apply a two-group design. If the latter is of interest, a single-group design may suffice. For example, if practitioners simply wish to create an intervention that results in 95% of employees being satisfied with their jobs, then a comparison group is not required to determine whether 95% of employees are satisfied with their jobs after the intervention. Of course, this method has various concerns, but it can provide important inferences when needed.
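For the second question, a single-group check can be as simple as testing the observed post-intervention proportion against the target level. The sketch below uses a one-sided normal-approximation test; the 95% target and the survey counts are illustrative assumptions rather than recommended values.

```python
from math import sqrt
from statistics import NormalDist

def target_level_reached(n_satisfied, n_total, target=0.95, alpha=0.05):
    """Single-group check of Sackett and Mullen's second question:
    has the target outcome level been reached after the intervention?
    Uses a one-sided normal-approximation test of the observed proportion
    against the target proportion (no comparison group required)."""
    observed = n_satisfied / n_total
    se = sqrt(target * (1 - target) / n_total)
    z = (observed - target) / se
    p_value = 1 - NormalDist().cdf(z)     # H1: true proportion exceeds the target
    return observed, z, p_value, p_value < alpha

# Hypothetical post-intervention survey: 485 of 500 employees satisfied.
obs, z, p, reached = target_level_reached(485, 500)
print(f"observed = {obs:.3f}, z = {z:.2f}, p = {p:.3f}, target reached: {reached}")
```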

Whether using these one- or two-group designs, Cascio and Aguinis (2005) provide seven possible research designs to test the effectiveness of an intervention. These experimental and quasi-experimental designs differ on the number of measurement occasions and experimental groups, allowing each design to address certain possible biasing factors when drawing conclusions. More importantly, these seven designs draw attention to the many additional methods available to test the effectiveness of a created intervention beyond the two-group posttest design.

Furthermore, many other innovative methods have been created to evaluate an intervention. Certain formative evaluations can provide initial support for the effectiveness of an intervention with extremely small sample sizes, such as three participants (Brown & Gerhardt, 2002; Saroyan, 1992). In addition, Yang, Sackett, and Arvey (1996) provide several suggestions regarding methods to evaluate training programs while reducing organizational costs—a concern that was not noted by Cucina et al. (2017). Regardless of the methodological approach, practitioners should always strive to apply multiple measures with satisfactory psychometric properties and demonstrated validity. In doing so, the multidimensional and hierarchical nature of many personal and organizational outcomes should be recognized, which would result in more accurate inferences about the effectiveness of an intervention.
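On the cost point, even a back-of-the-envelope power calculation makes the tradeoff visible: the sample (and therefore the expense) required for a two-group posttest comparison grows rapidly as the expected effect shrinks. The sketch below uses the standard normal-approximation formula for per-group sample size; the effect sizes, alpha, and power level are illustrative choices only.

```python
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-group posttest comparison,
    via the normal-approximation formula n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# Illustrative cost tradeoff: smaller expected effects demand far larger samples.
for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: about {n_per_group(d):.0f} employees per group")
```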

Conclusion

Cucina et al. (2017) should be applauded for drawing attention to current methodological approaches of I-O psychology practitioners. SKDA has notable concerns. To improve beyond prior approaches, however, best practices that have already been identified by prior authors should be incorporated into new approaches. There is no need to reinvent the wheel. Or, at least, there is no need to reinvent the wheel from all new parts. I hope that practitioners adopt the approaches suggested in the current article, and I hope that future researchers continue to develop the methods and processes detailed by Cucina et al. as well as those proposed in the current commentary.

References

Almirall, D., Compton, S. N., Gunlicks-Stoessel, M., Duan, N., & Murphy, S. A. (2012). Designing a pilot sequential multiple assignment randomized trial for developing an adaptive treatment strategy. Statistics in Medicine, 31(17), 1887–1902.
Barbazette, J. (2006). Training needs assessment: Methods, tools, and techniques (Vol. 1). San Francisco, CA: John Wiley & Sons.
Brown, J. (2002). Training needs assessment: A must for developing an effective training program. Public Personnel Management, 31(4), 569–578.
Brown, K. G., & Gerhardt, M. W. (2002). Formative evaluation: An integrative practice model and case study. Personnel Psychology, 55(4), 951–983.
Cascio, W. F., & Aguinis, H. (2005). Applied psychology in human resource management. Essex, UK: Pearson.
Cekada, T. L. (2010). Training needs assessment: Understanding what employees need to know. Professional Safety, 55(3), 28–33.
Collins, L. M., Murphy, S. A., & Strecher, V. (2007). The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): New methods for more potent eHealth interventions. American Journal of Preventive Medicine, 32(5), S112–S118.
Cook, T. D., Campbell, D. T., & Day, A. (1979). Quasi-experimentation: Design & analysis issues for field settings (Vol. 351). Boston: Houghton Mifflin.
Cucina, J. M., Walmsley, P. T., Gast, I. F., Martin, N. R., & Curtin, P. (2017). Survey key driver analysis: Are we driving down the right road? Industrial and Organizational Psychology: Perspectives on Science and Practice, 10(2), 234–257.
Gully, S., & Chen, G. (2010). Individual differences, attribute-treatment interactions, and training outcomes. In S. W. J. Kozlowski & E. Salas (Eds.), Learning, training, and development in organizations (pp. 3–64). New York: Routledge/Taylor & Francis.
Howard, M. C., & Jacobs, R. R. (2016). The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): Two novel evaluation methods for developing optimal training programs. Journal of Organizational Behavior, 37(8), 1246–1270.
Lunt, P. K. (1988). The perceived causal structure of examination failure. British Journal of Social Psychology, 27(2), 171–179.
Lunt, P. K. (1991). The perceived causal structure of loneliness. Journal of Personality and Social Psychology, 61(1), 26–34.
Lydon, D. M., Howard, M. C., Wilson, S. J., & Geier, C. F. (2015). The perceived causal structures of smoking: Smoker and non-smoker comparisons. Journal of Health Psychology, 21(9), 2042–2051. doi: 10.1177/1359105315569895
McClelland, S. B. (1993). Training needs assessment: An “open-systems” application. Journal of European Industrial Training, 17(1), 12–17.
McGehee, W., & Thayer, P. W. (1961). Training in business and industry. New York: John Wiley & Sons.
Sackett, P. R., & Mullen, E. J. (1993). Beyond formal experimental design: Towards an expanded view of the training evaluation process. Personnel Psychology, 46(3), 613–627.
Saroyan, A. (1992). Differences in expert practice: A case from formative evaluation. Instructional Science, 21(6), 451–472.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin.
White, P. A. (1995). Common-sense construction of causal processes in nature: A causal network analysis. British Journal of Psychology, 86(3), 377–395.
Yang, H., Sackett, P. R., & Arvey, R. D. (1996). Statistical power and cost in training evaluation: Some new considerations. Personnel Psychology, 49(3), 651–668.