Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Prestigious journals are widely admired for publishing quality scholarship, yet the primary indicators of journal prestige (i.e., impact factors) do not directly assess audience admiration. Moreover, the publication landscape has changed substantially in the last 20 years, with electronic publishing changing the way we consume scientific research. Given that it has been 18 years since the publication of the last journal prestige survey of SIOP members, the authors conducted a new survey and used these results to reflect on changing practices within industrial and organizational (I-O) psychology. SIOP members (n = 557) rated the prestige and relevance of I-O and management journals. Responses were analyzed according to job setting and were compared to Zickar and Highhouse's (2001) survey, conducted in 2000. There was considerable consistency in prestige ratings across settings (i.e., management department vs. psychology department; academic vs. applied), especially among the top journals. There was considerable variance, however, in the perceived usefulness of different journals. Results also suggested considerable consistency across the two time periods, but with some increases in prestige among OB-oriented journals. Changes in the journal landscape are discussed, including the rise of OHP as a topic of concentration in I-O. We suggest that I-O programs will continue to attract the top researchers in talent management and OHP, which should result in the use of a broader set of journals for judging I-O program impact.
Aguinis et al. (2017) contribute interesting analyses of cited sources in contemporary undergraduate industrial-organizational (I-O) psychology textbooks and continue their ongoing investigation into the long-term viability of I-O psychology as a unique discipline (see Aguinis, Bradley, & Brodersen, 2014). These analyses, conducted by authors who are members of business schools, attempt to answer questions related to the nature of work conducted by I-O psychologists, comparing the quality and importance of work conducted by faculty in business schools with that conducted by faculty in psychology departments. One of their general themes is that members of business schools are conducting important research that is influencing the future of I-O psychology by overtaking undergraduate textbooks. As such, the article has the feel of a conquering hero taunting its vanquished foe.
Landers and Behrend (2015) present yet another attempt to limit reviewer and editor reliance on surface characteristics when evaluating the generalizability of study results (see also Campbell, 1986; Dipboye & Flanagan, 1979; Greenberg, 1987; Highhouse, 2009; Highhouse & Gillespie, 2009). Most of the earlier treatments of sample generalizability, however, have focused on the use of college students in (mostly) laboratory studies. Many industrial–organizational (I-O) scholars have experienced the hostility with which studies using students as participants are received. For instance, Jen Gillespie and I observed, “Reviewers and editors commonly assert that students should not be used to study workplace phenomena as though such a declaration requires no further explanation” (Highhouse & Gillespie, 2009, p. 247). The difference this time, however, is that Landers and Behrend (2015) are reacting to dismissals of research using Mechanical Turk (MTurk) workers to make inferences about behavior in organizations. Landers and Behrend (2015) make the important point that any research population is likely to be atypical on some dimensions and that all samples are samples of convenience (see also Oakes, 1972). We agree. Furthermore, we make two observations about MTurk: (a) We believe that it should be met with less resistance than student samples have historically faced, and (b) we suggest that it provides a unique opportunity to bring back randomized experimentation in I-O psychology.
We are gratified by the large number of commentaries to our focal article (Dalal, Bonaccio, et al., 2010) that advocated greater integration of industrial–organizational psychology and organizational behavior (IOOB) with the field of judgment and decision making (JDM). The commentaries were uniformly constructive and civil. Our disagreements with the commentaries are mild and are limited primarily to the roles of external validity, internal validity, and laboratory experiments in IOOB. For the majority of our response, we attempt to build on the views expressed in the commentaries and to articulate some thoughts regarding the future. We structure our response according to the following themes: barriers to cross-fertilization between IOOB and JDM, areas of existing and potential JDM-to-IOOB cross-fertilization, areas of potential IOOB-to-JDM cross-fertilization, and ways to increase (and ideally institutionalize) cross-fertilization. We hope our focal article and our response to the commentaries will help to ignite exciting basic research and important practical applications associated with decision making in the workplace.
The major premise of this article is that increased exposure to—and increased application of—theories, methods, and findings from the judgment and decision-making (JDM) field will aid industrial–organizational psychology and organizational behavior (IOOB) researchers and practitioners in studying workplace decisions. To this end, we first provide evidence of the lack of cross-fertilization between JDM and IOOB and then provide an overview of the JDM research literature. Next, with the aid of a panel of prominent IOOB scholars who share JDM interests, we discuss the philosophical and methodological traditions in IOOB and JDM, the areas in which IOOB has already been enriched by JDM as well as the areas in which it might be further enriched in the future, ways of increasing cross-fertilization from JDM to IOOB, and ways in which IOOB can in turn contribute to JDM. Through this focal article, we hope to spark conversation and ultimately engender more cross-fertilization between JDM and IOOB.
In 1985, the U.S. Army commissioned prominent psychologists to investigate the possibility of extending human capabilities using parapsychological techniques (Swets & Bjork, 1990). Influential members of the army were frustrated by the slow pace of advancements in human performance and believed that large gains could be made using methods outside of the mainstream. They believed that things like mental concentration and guided imagery could allow soldiers to walk through walls, view things remotely, and even kill adversaries by staring at them (Ronson, 2005). Not surprisingly, the panel of psychologists concluded that these ideas were without merit. In reviewing the psychologists’ work, Morrison (1988) observed, “Among the most difficult lessons in science is how not to deceive yourself” (p. 109).
The focus of this article is on implicit beliefs that inhibit adoption of selection decision aids (e.g., paper-and-pencil tests, structured interviews, mechanical combination of predictors). Understanding these beliefs is just as important as understanding organizational constraints to the adoption of selection technologies and may be more useful for informing the design of successful interventions. One of these is the implicit belief that it is theoretically possible to achieve near-perfect precision in predicting performance on the job. That is, people have an inherent resistance to analytical approaches to selection because they fail to view selection as probabilistic and subject to error. Another is the implicit belief that prediction of human behavior is improved through experience. This myth of expertise results in an overreliance on intuition and a reluctance to undermine one’s own credibility by using a selection decision aid.