The goal of focal articles in Industrial and Organizational Psychology: Perspectives on Science and Practice is to present new ideas or different takes on existing ideas and stimulate a conversation in the form of comment articles that extend the arguments in the focal article or that present new ideas stimulated by those articles. The two focal articles in this issue stimulated a wide range of reactions and a good deal of constructive input.
Situational judgment tests (SJTs) are typically conceptualized as contextualized selection procedures that capture candidate responses to a set of relevant job situations as a basis for prediction. SJTs share their sample-based and contextualized approach with work samples and assessment center exercises, although they differ from these other simulations by presenting the situations in a low-fidelity (e.g., written) format. In addition, SJTs do not require candidates to respond through actual behavior because they capture candidates’ situational judgment via a multiple-choice response format. Accordingly, SJTs have also been labeled low-fidelity simulations. This SJT paradigm has been very successful: In the last two decades, scientific interest in SJTs has grown, and they have made rapid inroads in practice as attractive, versatile, and valid selection procedures. Despite their popularity and the voluminous research on their criterion-related validity, however, little attention has been paid to developing a theory of why SJTs work. Similarly, in SJT development, often little emphasis is placed on measuring clear and explicit constructs. Therefore, Landy (2007) referred to SJTs as “psychometric alchemy” (p. 418).
Whereas Lievens and Motowidlo (2016) propose a model of situational judgment test (SJT) performance that removes the “situation” in favor of conceptualizing SJTs as a measure of general domain knowledge, we argue that the expression of general domain knowledge is in fact contingent on situational judgment. As we explain, the evidence cited by Lievens and Motowidlo against a situational component does not inherently exclude the importance of situations in SJTs and overlooks the strong support for a person–situation interaction explanation of behavior. Based on the interactionist literature—in particular, the trait activation theory (TAT) and situational strength literatures—we propose a model that both maintains the key pathways and definitions posited by Lievens and Motowidlo and integrates the situational component of SJTs.
In their focal article, Lievens and Motowidlo (2016) consider procedural knowledge about effective actions in work situations the key component of their theory of situational judgment tests (SJTs). In our commentary, we suggest that situational judgment should nevertheless not be neglected in such a theory.
Although we echo Lievens and Motowidlo's (2016) view that situational judgment test (SJT) research should subscribe to the construct-driven approach, we disagree with their argument on two counts. First, we question whether measuring general domain knowledge represents the only way to advance SJT research. Second, we question whether it is appropriate to downplay the importance of situations in SJTs. In this commentary, we first briefly review construct-driven SJT studies and then share our own experience in developing an SJT for integrity in China using the construct-driven approach. Based on the review and reflection, we come to two major conclusions: (a) construct-driven SJT research has progressed well so far without the reconceptualization of SJTs as measures of general domain knowledge, and (b) specific situations are an important feature of SJTs that should not yet be dismissed.
What is the role of the situation in situational judgment tests (SJTs)? Lievens and Motowidlo (2016) assert that SJTs are somewhat of a misnomer because they do not actually measure how individuals would behave in a given situation per se. According to these researchers, SJTs assess general domain knowledge—whether potential employees recognize the “utility of expressing certain traits” (p. 4). As a result, SJTs map onto personality measures, which are a summary of behavior across time and situations. SJTs provide predictive validity in part because they tap into personality. However, rather than renaming SJTs, it is possible to reintroduce the concept of a situation to provide even greater predictive power. Thus, the goals of this commentary are to (a) clarify what constitutes a situation, (b) describe what SJTs might actually measure, and (c) set forth a path for a taxonomy of workplace situations.
Although Lievens and Motowidlo (2016) made a strong case for reconceptualizing situational judgment tests (SJTs) as measures of general domain knowledge, we disagree with their view that the judgment or assessment of the situation itself is not important. We contend that situation assessment is an integral yet ignored factor in SJTs and that both general domain knowledge and situation assessment are needed to better understand how SJTs work.
The construct validity of situational judgment tests (SJTs) is a “hot mess.” The suggestions of Lievens and Motowidlo (2016) concerning a strategy to make the constructs assessed by an SJT more “clear and explicit” (p. 5) are worthy of serious consideration. In this commentary, we highlight two challenges that will likely need to be addressed before one can develop SJTs with clear and explicit constructs. We also offer critiques of four positions presented by Lievens and Motowidlo that are not well supported by evidence.
Lievens and Motowidlo (2016) argue compellingly that situational judgment tests (SJTs) measure job-relevant general domain knowledge, conceptualized as implicit trait policies (ITPs). ITPs are defined as a person's knowledge about the utility of expressing certain traits. They develop through the feedback a person receives when acting in accordance with their trait profiles in different environments (work, life, leisure). Positive feedback reinforces the knowledge that behavior in accordance with one's own traits is appropriate, and negative feedback reinforces the knowledge that an approach that differs from one's trait tendencies may be more effective. As such, ITPs represent a person's knowledge about the effectiveness of behaviors across a variety of contexts.
The situational judgment test (SJT) development procedures outlined by the authors of the focal piece (Lievens & Motowidlo, 2016) provide an excellent framework to design SJTs that help answer fundamental questions about what SJTs measure and why they work. This article expands on this framework to explore further some of the issues faced in the development of SJTs. These issues include the implied assumption of linearity between general domain knowledge and effectiveness, whether the SJT measures a single construct or multiple constructs, and when a more criterion-centered approach to SJT development might be preferred.
Lievens and Motowidlo (2016) present a convincing case for why situational judgment tests (SJTs) should be developed specifically to measure general domain knowledge, but I have two concerns regarding the use of SJTs in psychological research and practical settings if the reconceptualization of SJTs offered by the authors is adopted to the exclusion of other current approaches. The first concerns abandoning SJTs as a helpful job analysis tool if we encourage intentional conflation of trait expression with job effectiveness in the development of SJT items. Second, diluting the situational component of SJTs may reduce their acceptance as selection tools by job applicants and practitioners in organizations.
Lievens and Motowidlo (2016) addressed three of the most important unanswered questions regarding situational judgment tests (SJTs): (a) Should we view them as tests that can assess relatively generic constructs that predict performance across settings, (b) what constructs can they assess, and (c) how should they be scored? They suggested fundamentally changing the SJT development process by targeting the specific constructs we measure, using scoring systems that address both the targeted traits and their situational effectiveness, examining construct validity, and evaluating the criterion-related validity of SJT traits (Lievens & Motowidlo, pp. 11–12). These recommendations are highly significant on both practical and conceptual grounds.
Situational judgment tests (SJTs) occasionally fail to predict job performance in criterion-related validation studies, often despite much effort to follow scholarly recipes for their development. This commentary provides some plausible explanations for why this may occur as well as some tips for SJT development. In most cases, we frame the issue from an implicit trait policy (ITP) perspective (Motowidlo, Hooper, & Jackson, 2006a, 2006b) and the measurement of general domain knowledge. In other instances, we believe that the issue does not have a direct tie to the ITP concept, but our experience suggests that the issue is of sufficient importance to include in this response. The first two issues involve challenges gathering validity evidence to support the use of SJTs, and the remaining issues deal more directly with SJT design considerations.
Lievens and Motowidlo (2016) present a case for situational judgment tests (SJTs) to be conceptualized as measures of general domain knowledge, which the authors define as knowledge of the effectiveness of general domains such as integrity, conscientiousness, and prosocial behaviors in different jobs. This argument comes from work rooted in the use of SJTs as measures of implicit trait policies (Motowidlo & Beier, 2010; Motowidlo, Hooper, & Jackson, 2006), measured with a format described as a “single response SJT” (Kell, Motowidlo, Martin, Stotts, & Moreno, 2014; Motowidlo, Crook, Kell, & Naemi, 2009). Given evidence that SJTs can be used as measures of general domain knowledge, the focal article concludes with a suggestion that general knowledge can be measured not only by traditional text-based or paper-and-pencil SJTs but also through varying alternate formats, including multimedia SJTs and interactive SJTs.
In this article, we demonstrate that samples in the industrial and organizational (I-O) psychology literature do not reflect the labor market, overrepresenting core, salaried, managerial, professional, and executive employees while underrepresenting wage earners, low- and medium-skill first-line personnel, and contract workers. We describe how overrepresenting managers, professionals, and executives causes research about these other workers to be suspect. We describe several ways that this underrepresentation reduces the utility of the I-O literature and provide specific examples. We discuss why the I-O literature underrepresents these workers, how it contributes to the academic–practitioner gap, and what researchers can do to remedy the issue.
The focal article by Bergman and Jean (2016) raises an important issue by documenting the underrepresentation of nonprofessional and nonmanagerial workers in industrial and organizational (I-O) research. They defined workers as “people who were not executive, professional or managerial employees; who were low- to medium-skill; and/or who were wage earners rather than salaried” (p. 89). This definition encompasses a wide range of employee samples: from individuals working in blue-collar skilled trades such as electricians and plumbers, to police officers, soldiers, and call center representatives, to low-skill jobs such as fast food workers, tollbooth operators, and migrant day workers. Because there is considerable variability in the pay, benefits, skill level, autonomy, job security, schedule flexibility, and working conditions that define these workers’ experiences, a more fine-grained examination of who these workers are is necessary to understand the scope of the problem and the specific subpopulations of workers represented (or not) in existing I-O research.
Bergman and Jean's (2016) focal article decries the limited research attention of industrial and organizational (I-O) psychologists on “workers”—that is, employees such as wage earners, frontline workers, and contractors, who do not fill professional, managerial, or executive positions. We agree. In addition to the scientific and moral benefits of studying workers, there is a practical imperative. An academic discipline that comes across as uninterested in workers may leave itself open to charges of being the “handmaiden” of management (Hulin, 2002, p. 12). Moreover, such an academic discipline may be ill prepared to provide evidence-based contributions to important societal debates on topics such as income inequality and immigration.
Two recent focal articles in this journal have addressed issues related to sample selection and generalizability of results (Bergman & Jean, 2016; Landers & Behrend, 2015). If Bergman and Jean are correct, gone are the days of the Hawthorne studies in which research focused on the majority of the human workforce: the working class. Instead, researchers are allegedly two to three times as likely to exclusively sample managers as they are to exclusively sample workers. Assuming this is true, Bergman and Jean are correct to address why this occurs and how it may impact the field. However, there are two critical issues that must be considered alongside these questions: ongoing changes in how work is conducted and temporal trends in work. A consideration of these issues should yield additional insights that may supplement the recommendations made by Bergman and Jean.
Bergman and Jean (2016) have contributed an important essay to the continuing self-reflection and maturation of the field of industrial–organizational (I-O) psychology—or, as it is known in much of the world outside the United States, work psychology. They clearly and adequately document that the field has relatively neglected to study the world of (largely lower-level) workers who are not managers, executives, professionals, or students and that this has adversely affected the validity of our science and the relevance of our professional practice in a number of not-so-intuitively obvious ways. But as critical as those observations are, I believe the most important aspect of their piece has to do with the inferences they offer as to why our published literature is so skewed. They suggest six potential, not mutually exclusive, explanations, including the possibility of personal biases among I-O psychologists. However, before focusing on those explanations, it should be informative to place the Bergman/Jean thesis in context. There is a growing, recent body of critical evidence and commentary concerning this and similar issues—although less consideration generally has been given to their likely causes.
Few would dispute that the nature of work, and the workers who perform it, has evolved considerably in the 70 years since the founding of the Society for Industrial and Organizational Psychology (SIOP) as the American Psychological Association's (APA's) Division 14, focused on industrial, business, and organizational psychology. Yet, in many ways the populations sampled in industrial–organizational (I-O) psychology research have failed to keep pace with this evolution. Bergman and Jean (2016) highlight how I-O research samples underrepresent (relative to the labor market) low- or medium-skill workers, wage earners, and temporary workers, resulting in a body of science that is overly focused on salaried, professional managers and executives. Though these discrepancies in the nature of individuals’ work and employment are certainly present and problematic in organizational research, one issue that should not be overlooked is that of adequately representing nationality and culture in I-O research samples.