The goal of focal articles in Industrial and Organizational Psychology: Perspectives on Science and Practice is to present new ideas, or fresh takes on existing ones, and to stimulate conversation in the form of commentaries that extend the focal article's arguments or present new ideas inspired by them. The two focal articles in this issue stimulated a wide range of reactions and a good deal of constructive input.
Sampling strategy has critical implications for the validity of a researcher's conclusions. Despite this, sampling is frequently neglected in research methods textbooks, during the research design process, and in the reporting in our journals. The lack of guidance on this issue often leads reviewers and journal editors to rely on simple rules of thumb, myth, and tradition when judging sampling, which promotes the unnecessary and counterproductive characterization of sampling strategies as universally “good” or “bad.” Such oversimplification, especially by journal editors and reviewers, slows the progress of the social sciences by treating legitimate data sources as categorically unacceptable. Instead, we argue that sampling is better understood in the methodological terms of range restriction and omitted-variables bias. This considered approach has far-reaching implications because in industrial–organizational (I-O) psychology, as in most social sciences, virtually all samples are convenience samples. Organizational samples are not gold-standard research sources; they are merely a specific type of convenience sample with their own positive and negative implications for validity. This fact does not condemn the science of I-O psychology, but it does highlight the need for more careful consideration of how and when a finding may generalize, based on the particular mix of validity-related affordances provided by each sample source that might be used to investigate a given research question. We call for researchers to explore such considerations cautiously and explicitly, both in the publication and in the review of research.
The focal article by Landers and Behrend (2015) persuasively argues that it is misguided to universally condemn potential convenience data sources that fall outside of traditional industrial–organizational (I-O) samples such as college students and organizational samples. I agree that we instead need to consider the context, strengths, and weaknesses of more recently recognized potential data sources. This commentary focuses on understanding the context of one particular data source, Amazon's Mechanical Turk (MTurk; https://www.mturk.com/). Although some existing research has examined the demographic characteristics of MTurk workers and how those workers' answers compare with more traditional samples (Casler, Bickel, & Hackett, 2013; Goodman, Cryder, & Cheema, 2013; Paolacci & Chandler, 2014), for this commentary I decided to take a different tack. For approximately 50 days, I acted as an MTurk worker on the site and participated in the online communities where MTurk workers congregate. My purpose was to experience the MTurk worker environment firsthand and to observe how MTurk workers interact with one another and with the site, in the spirit of participant-observer research. Stanton and Rogelberg (2002) argue that online communities might be a particularly fruitful avenue for such participant-observer research within I-O psychology. I am quick to note that I do not regard my efforts as anywhere near as extensive as much of the participant-observer work of the past; I spent my time on MTurk in the spirit of such work rather than as a match for its methodological and analytical rigor. The observations I make in this commentary are grounded in my own experiences as well as the existing literature on Amazon MTurk.
We are in almost full agreement with Landers and Behrend's (2015) thoughtful and balanced critiques of the convenience sampling strategies associated with the four most frequently used data sources in our field. In this commentary, we expand on Landers and Behrend's discussion of Mechanical Turk (MTurk) by providing further support for, and clarity on, the four potential concerns and relative advantages associated with MTurk. We also raise a few additional concerns and challenges to which the current literature does not yet offer definitive answers. We conclude with some practical guidelines summarizing the relative advantages and unique challenges of using MTurk.
Landers and Behrend (2015) present yet another attempt to limit reviewer and editor reliance on surface characteristics when evaluating the generalizability of study results (see also Campbell, 1986; Dipboye & Flanagan, 1979; Greenberg, 1987; Highhouse, 2009; Highhouse & Gillespie, 2009). Most of the earlier treatments of sample generalizability, however, have focused on the use of college students in (mostly) laboratory studies. Many industrial–organizational (I-O) scholars have experienced the hostility with which studies using students as participants are received. For instance, Jen Gillespie and I observed, “Reviewers and editors commonly assert that students should not be used to study workplace phenomena as though such a declaration requires no further explanation” (Highhouse & Gillespie, 2009, p. 247). The difference this time, however, is that Landers and Behrend (2015) are reacting to dismissals of research that uses Mechanical Turk (MTurk) workers to make inferences about behavior in organizations. Landers and Behrend make the important point that any research population is likely to be atypical on some dimensions and that all samples are samples of convenience (see also Oakes, 1972). We agree. Furthermore, we make two observations about MTurk: (a) we believe that it should be met with less resistance than student samples have historically faced, and (b) we suggest that it provides a unique opportunity to bring back randomized experimentation in I-O psychology.
Landers and Behrend (2015) are the most recent in a long line of researchers who have suggested that online samples generated from sources such as Amazon's Mechanical Turk (MTurk) are as good as, or potentially even better than, the typical samples found in psychology studies. Importantly, the authors caution that researchers and reviewers need to reflect carefully on the goals of research when evaluating the appropriateness of samples. However, although they argue that certain types of samples should not be dismissed out of hand, they note that there is only scant evidence demonstrating that online sources can provide usable data for organizational research and that further research evaluating the validity of these new data sources is needed. Because the target article does not directly address the potential problems with such samples, we review what is known about collecting online data (with a particular focus on MTurk) and illustrate some potential problems using data derived from such sources.
In their focal article, Landers and Behrend (2015) propose to reevaluate the legitimacy of using so-called convenience samples (e.g., crowdsourcing, online panels, and student samples) as compared with traditional organizational samples in industrial–organizational (I-O) psychology research. They suggest that such sampling strategies should not be judged as inappropriate per se but that decisions to accept or reject such samples must be empirically or theoretically justified. I concur with Landers and Behrend's call for a more nuanced view of convenience samples. More precisely, I suggest that we should not “throw the baby out with the bathwater” but rather carefully and empirically examine the advantages and risks associated with each sampling strategy before classifying it as suitable or not.
The focal article by Landers and Behrend (2015) makes the case that samples collected on microtask websites like Amazon's Mechanical Turk (MTurk) are inherently no better or worse than traditional convenience samples drawn from university students or organizations. We wholeheartedly agree. However, having successfully used MTurk and other online sources for data collection, we feel that the focal article understates the caution required in identifying inattentive respondents and the problems that can arise if such individuals are not removed from the dataset. Although we focus on MTurk, similar issues arise for most “low-stakes” assessments, including student samples, which seem increasingly to be collected online.
Landers and Behrend (2015) question organizational researchers' stubborn reliance on sample source to infer the validity of research findings, and they challenge the arbitrary distinctions researchers often make between sample sources. Unconditional favoritism toward particular sampling strategies (e.g., organizational samples) can restrict methodological choices, which in turn may limit opportunities to answer certain research questions. Landers and Behrend contend that no sampling strategy is inherently superior (or inferior) and that, therefore, all types of samples warrant careful consideration before any validity-related conclusions can be drawn. Despite its sound arguments, the focal article focuses on external validity and deemphasizes the potential influence of sample source on internal validity. Agreeing with the position that no samples are the “gold standard” in organizational research and practice, we focus on insufficient effort responding (IER; Huang, Curran, Keeney, Poposki, & DeShon, 2012) as a threat to internal validity across sample sources.
Landers and Behrend (2015) make a good argument that more consideration should be given to sampling strategies in light of the specific research question prior to data collection and that nonorganizational samples should not be automatically dismissed by journal editors and reviewers. Yet the authors only briefly mention one particular issue that is also relevant to the validity of our research findings: participant motivation. Researchers should seek to better understand why individuals choose to participate in a study and what may be motivating the levels of effort they put forth. Two critical questions are these: Are participants who they say they are (e.g., working adults)? And are participants paying attention to the study instructions and questions and participating with effort? In this response, I expand on issues related to participant motivation and apply them to the sampling strategies discussed by Landers and Behrend. I also provide suggestions for ways researchers may address these issues.
Here, we expand on Landers and Behrend's (2015) discussion of the external validity of convenience samples. In particular, we note that their focal article failed to mention one important limitation of multi-organization convenience samples (e.g., MTurk samples, student samples): such samples tend to confound levels of analysis, which affects their external validity. Specifically, between-organizations phenomena (i.e., organization-level) and within-organizations phenomena (i.e., individual-level) are distinct and separable (Ostroff, 1993; Robinson, 1950). Unfortunately, multi-organization samples such as MTurk or MBA student samples can confound these two sets of phenomena. The current commentary uses a levels-of-analysis framework to expand on Landers and Behrend's discussion of what external validity is and then illustrates how the diversity of convenience samples can actually harm external validity under some common circumstances.
We agree with Landers and Behrend's (2015) proposition that Amazon's Mechanical Turk (MTurk) may provide great opportunities for organizational research samples. However, some groups are characteristically difficult to recruit because they are stigmatized or socially disenfranchised (Birman, 2005; Miller, Forte, Wilson, & Greene, 2006; Sullivan & Cain, 2004; see Campbell, Adams, & Patterson, 2008, for a review). These groups may include individuals who have not previously been the focus of much organizational research, such as those of low socioeconomic status; individuals with disabilities; lesbian, gay, bisexual, or transgender (LGBT) individuals; or victims of workplace harassment. As Landers and Behrend (2015) point out, there is an overrepresentation of research using “Western, educated, industrialized, rich, and democratic” participants. It is important to extend research beyond these samples to examine workplace phenomena that are specific to special populations. We contribute to this argument by noting the particular usefulness that MTurk can provide for sampling from hard-to-reach populations, which we characterize as groups that are in the numerical minority in terms of nationwide representation. To clarify, we focus our discussion on populations that are traditionally hard to reach in the context of contemporary organizational research within the United States.
Landers and Behrend (2015) call for editors and reviewers to resist using heuristics when evaluating samples in research and for researchers to carefully choose samples appropriate for their research questions. Whereas we fully agree with the former conclusion, we believe the latter can be extended even further: researchers should embrace the strengths of their samples for understanding their research rather than simply defend them. We believe that samples are not inherently better or worse but rather better suited to different research objectives. In this commentary, we identify three continua along which research goals can differ to demonstrate that all samples can inform science. Depending on the position of one's research on these continua, different samples exhibit different strengths; the continua described below can be used to anchor one's sample and to demonstrate how it can benefit, rather than limit, research conclusions. As discussed in the focal article, researchers often apologize for their convenience samples as one of a litany of limitations; we hope that researchers will move sampling issues out of the limitations section and into the main discussion.
We agree with the authors of the focal article that too little attention is paid to sampling in industrial–organizational (I-O) psychology research. Upon reflection and in response to the focal article by Landers and Behrend (2015), we answer three primary questions: (a) What is it about our training, science, and practice as I-O psychologists that has led to less focus on sampling issues? (b) Does it matter? (c) If so, then what should we do about it?
In this article, we highlight why and how industrial and organizational psychologists can take advantage of research on 21st century skills and their assessment. We present vital theoretical perspectives, a suitable framework for assessment, and exemplary instruments with a focus on advances in the assessment of human capital. Specifically, complex problem solving (CPS) and collaborative problem solving (ColPS) are two transversal skills (i.e., skills that span multiple domains) that are generally considered critical in the 21st century workplace. The assessment of these skills in education has linked fundamental research with practical applicability and has provided a useful template for workplace assessment. Both CPS and ColPS capture the interaction of individuals with problems that require the active acquisition and application of knowledge in individual or group settings. To ignite a discussion in industrial and organizational psychology, we discuss advances in the assessment of CPS and ColPS and propose ways to move beyond the current state of the art in assessing job-related skills.
We find Neubert, Mainert, Kretzschmar, and Greiff's (2015) article to be worth discussing and embracing because it represents not only a pragmatic offering of two important constructs for 21st century work but also an important opportunity for industrial–organizational (I-O) scholars and practitioners to consider several questions related to the future of I-O psychology. Neubert et al. correctly identified the broad trends that are influencing the economic environment that we live in and made a compelling argument that I-O psychologists should join other researchers and policymakers from ancillary fields to identify and measure the unique competencies and skills that will determine success in the future of work. In our own research on new technologies and their use in talent assessment and selection (e.g., mobile device testing), we have often considered other future-related research questions, and we would like to offer them here as a supplement to this discussion in the hopes that it might spur further forward-thinking conversation, research, and practice. Below we offer five additional themes to organize the questions that we believe are important to consider as I-O psychologists evaluate the merits and uses of 21st century skills such as complex problem solving and collaborative problem solving (CPS and ColPS).
Neubert, Mainert, Kretzschmar, and Greiff (2015) plead for integrating the 21st century skills of complex problem solving (CPS) and collaborative problem solving (ColPS) into the assessment and development suite of industrial and organizational (I-O) psychologists, given the expected increase in nonroutine and interactive tasks in the new workplace. At the same time, they promote new ways of assessing these skills using computer-based microworlds, which enable the systematic variation of problem features in assessment. Neubert and colleagues' (2015) suggestions are a valuable step in connecting differential psychologists' models of human differences and functioning with human resources professionals' interest in understanding and predicting behavior at work. We concur that CPS and ColPS are important transversal skills, useful to I-O psychologists, but they are only two members of a single family, and the domain of 21st century skills includes other families of a different kind that also have utility for I-O psychologists. The current contribution is meant to broaden this interesting discussion in two important ways. We clarify that CPS and ColPS need to be considered in the context of a wider set of 21st century skills with origins in the education domain, and we highlight a number of crucial steps that still need to be taken before “getting started” (Neubert et al., 2015, p. last page of the discussion) with this taxonomic framework. But first, we feel the need to slightly reframe the relevance of considering 21st century skills in I-O psychology by shifting attention from narrow task-related skills to the broader domain of career management competencies.
In only a few places do Neubert, Mainert, Kretzschmar, and Greiff (2015) mention the role of communication and coordination among team members in collaborative problem solving. Although complex and collaborative problem solving is indeed an imperative for team and organizational success in the 21st century, it is easier said than done. Collaborative problem solving depends critically on the communication and interaction skills of the team members and of the team leader. The intent of this commentary is to shine a light on the critical role of interpersonal and communication skills in complex and collaborative problem solving.
Neubert, Mainert, Kretzschmar, and Greiff (2015) rightly argue that today's business world requires employees to frequently engage in nonroutine, creative, and interactive tasks. The authors go further to describe two potentially important skills, complex problem solving and collaborative problem solving, which they believe can address gaps in our current understanding of employee skill assessment. I contend, however, that the authors might be reinventing the wheel with this framework, given that the already popular practice of competency modeling addresses the very deficiencies that the authors argue exist. To expand on this argument, I will first provide a brief history and discussion of what competency modeling is, followed by an explanation of several key benefits of this approach in terms of addressing the authors' concerns. Then, on the basis of my applied experience as an external consultant, I will discuss how I might use competency modeling to address one of the authors' own example scenarios, which should help identify ways in which competency modeling subsumes Neubert and colleagues' approach.
In the past few years, the term “21st century skills” has gained increasing popularity in educational research and business practice. Neubert, Mainert, Kretzschmar, and Greiff (2015) advocated for the utility of assessing 21st century skills in industrial–organizational psychology (I-O psychology) and its superiority over assessing basic psychological constructs, using application-oriented constructs, or conducting job analysis to determine relevant skills for individual work settings. We argue, however, that the issues identified and discussed in the focal article are rather threefold and that, to integrate 21st century skills into I-O psychology and to bridge organizational science and practice, we need to have (a) a standard framework of clearly defined constructs, (b) innovative assessments, and (c) evidence for validity generalization. We elaborate below.