
Too early to call: What we do (not) know about the validity of cybervetting

Published online by Cambridge University Press: 09 September 2022

Franz Wilhelm Mönke*
Department of Psychology, University of Münster, Münster, Germany

Philipp Schäpers
Department of Psychology, University of Münster, Münster, Germany

*Corresponding author. Email: franz.moenke@uni-muenster.de

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

There is no doubt: Cybervetting is now part of recruiters’ everyday routine (Berkelaar & Buzzanell, 2015; Hartwell & Campion, 2020). Thus, we could not agree more with Wilcox et al. (2022) that it is important to address the considerable gap between cybervetting practice and research, especially concerning its validity. Their article is a wake-up call that most questions on this issue are still unanswered. We see the point of their distinct skepticism and agree that the validity of cybervetting is unproven. Nonetheless, we argue that the evidence needs a closer look before cybervetting is said to have “dubious effectiveness” (Wilcox et al., 2022, p. X), to lead to “dampened productivity” (p. X), and to “limit freedom of expression” (p. X). Although there is good reason to be cautious, we question whether cybervetting is an “unreflective activity undertaken by individual hiring agents within organizations that alternately ignore or encourage it” (p. X). Many researchers warned about the risks of cybervetting early on (e.g., Brown & Vaughn, 2011; Caers & Castelyns, 2011; Davison et al., 2011); job candidates report intentions to withdraw applications (Schneider et al., 2015; Suen, 2018) and to sue organizations (Stoughton et al., 2015) if cybervetting is used. Recruiters also report ambivalence about this practice (e.g., Hoek et al., 2016; McDonald et al., 2021; Pike et al., 2018). Rather than stakeholders “uncritically” (Wilcox et al., 2022, p. X) accepting cybervetting, we see a vivid debate. Social media (SM) platforms might be a modern, easily accessible, less fakeable, and economical source for first impressions of applicants’ traits, replacing traditional resumés (Davison et al., 2011; Landers & Schmidt, 2016)—if there is evidence for their validity. The debate started when Davison et al. (2011), Roth et al. (2016), and Landers and Schmidt (2016) offered mixed conclusions about cybervetting’s validity. In the following, we shine a light on previous findings regarding cybervetting’s validity, elaborate on what remains unknown, and call for further action.

What we know so far about cybervetting validity: Is what you see what you get?

In their focal article, Wilcox et al. (2022) described which criteria recruiters look for when cybervetting: Recruiters attempt (a) to infer the applicant’s person–job (P–J) fit, (b) to infer the person–organization (P–O) fit (Hoek et al., 2016; Roulin & Bangerter, 2013), and (c) to screen for red flags (e.g., drug abuse; Hartwell & Campion, 2020; McDonald et al., 2021). We propose that cybervetting should be seen as a lens model in the sense of Brunswik (1952): SM content serves as a lens through which recruiters seek to infer applicants’ characteristics (e.g., conscientiousness, professionalism). Based on these inferences, recruiters judge applicants’ P–J or P–O fit and decide which candidate(s) to pursue. Thus, SM content is used to assess P–J fit, P–O fit, and red flags, which, in turn, should predict performance.

According to Gosling et al. (2002), an observer’s accuracy—in this case referring to a recruiter’s judgment—hinges on cue validity (i.e., how a trait manifests in a cue, such as how conscientiousness manifests in a Facebook posting) and cue utilization (i.e., how a certain cue is used, such as whether a particular Facebook posting is used to evaluate conscientiousness). Thus, the validity of cybervetting depends on (a) whether an SM profile includes cues for P–J fit, P–O fit, and red-flag behavior; (b) whether P–J fit, P–O fit, and red flags are valid criteria for predicting an applicant’s traits and performance; and (c) how each of these cues is used. It is clear what recruiters use cybervetting for, and these criteria are good starting points for evaluating applicants: P–J fit (knowledge, skills, and abilities) and P–O fit (values) are valid predictors of performance (Kristof-Brown et al., 2005). Accordingly, the cybervetting debate focuses on convergent validity (i.e., whether cybervetting judgments converge with established measures of these constructs), criterion-related validity (i.e., whether cues from cybervetting judgments can predict performance), and risks to validity (reliability, biases).
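Brunswik’s framework can be made concrete with the classic lens model equation, a standard formalization of judgmental accuracy that we add here for illustration (it is not used in the focal article or in the studies cited above):

$$
r_a = G\,R_e\,R_s + C\sqrt{(1 - R_e^2)\,(1 - R_s^2)}
$$

Here, $r_a$ is achievement (in our case, the correlation between a recruiter’s judgment and an applicant’s actual standing on a trait), $R_e$ captures cue validity (how well the criterion can be predicted from the SM cues), $R_s$ captures the consistency of the recruiter’s cue utilization, $G$ is the match between the environmental and judgmental cue weights, and $C$ reflects valid knowledge not represented by the linear cue models. The equation makes explicit that accurate cybervetting requires both valid cues and consistent, well-matched cue use; a deficit in either term bounds the achievable validity.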

Convergent validity

Many studies have addressed the convergence of self-rated personality traits and SM-based ratings: Schroeder et al. (2020) pointed out that cybervetting judgments cannot replace established measures of the Big Five, nor could they adequately measure professionalism, written communication skills, and cognitive ability.

In contrast, substantial correlations for the Big Five have been reported for personal blogs (Vazire & Gosling, 2004), Facebook profiles (e.g., Back et al., 2010), and LinkedIn profiles (Roulin & Levashina, 2019; Van de Ven et al., 2017). Two meta-analyses concluded that SM profiles contain substantial traces of one’s personality traits (Azucar et al., 2018; Settanni et al., 2018). One possible explanation for these diverging findings was provided by Roulin and Levashina (2019), who found convergent validity only for visible skills and traits (e.g., extraversion, planning). Thus, we conclude that SM content might be able to provide convergence for certain (visible) traits or at least give a first impression of an applicant’s personality.

Criterion-related validity

The performance-related validity of cybervetting remains controversial: Kluemper et al. (2012) and Kluemper and Rosen (2009) found good performance-related validity of cybervetting judgments. Roulin and Levashina (2019) found that LinkedIn judgments predict whether applicants obtain a job and get promoted. Aguado et al. (2019) reported that LinkedIn profiles predict productive hours and employees’ potential, but Cubrich et al. (2021; cybervetting on LinkedIn) and Van Iddekinge et al. (2016; cybervetting on Facebook) reported that cybervetting “correlate[s] essentially zero” (Van Iddekinge et al., 2016, p. 1832) with job performance. Furthermore, red-flag postings were not able to predict counterproductive work behavior (although they did predict self-reported alcohol abuse; Becton, Walker, Schwager, et al., 2019). Then again, some LinkedIn profile features (e.g., length, number of connections) correlated with users’ personality traits (Fernandez et al., 2021; Roulin & Levashina, 2019). These contradictory findings allow no clear conclusion on criterion-related validity.

Reliability: A prerequisite for cybervetting validity

Roth et al. (2016) proposed that reliability might be an issue in cybervetting because the unstandardized content of SM profiles might lead to inaccurate assessments. Indeed, LinkedIn profile contents differ between occupations (sales vs. human resources vs. industrial-organizational [I-O] psychology; Zide et al., 2014). On the other hand, an analysis by Zhang et al. (2020) found that information on, for example, written communication skills, values, and education is equally present on most SM profiles.

Empirically, some cybervetting studies have reported good interrater agreement (e.g., Kluemper et al., 2012; Roulin & Levashina, 2019; Schroeder et al., 2020; Van de Ven et al., 2017), whereas others have not (e.g., Van Iddekinge et al., 2016; Zhang et al., 2020). Whereas more structured assessments improved reliability in the study by Roulin and Levashina (2019), other studies—namely those by Schroeder et al. (2020) and Zhang et al. (2020)—came to different conclusions. Again, we see that research disagrees. Nonetheless, reliability issues might lower validity estimates of cybervetting.
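To illustrate how interrater agreement is typically quantified in these studies, the following minimal sketch computes a two-way random-effects intraclass correlation, ICC(2,1), from a simulated ratings matrix (Python with NumPy; the data, sample sizes, and effect sizes are illustrative assumptions, not values from the cited studies):

```python
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """Shrout-Fleiss ICC(2,1): two-way random effects, single rater,
    for an (n targets x k raters) ratings matrix."""
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between applicants
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between recruiters
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Simulated example: 100 applicant profiles rated by 3 hypothetical recruiters;
# each rating = true profile score + rater-specific noise.
rng = np.random.default_rng(1)
true_scores = rng.normal(size=(100, 1))
ratings = true_scores + rng.normal(scale=1.0, size=(100, 3))
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")
```

With equal true-score and error variance, as simulated here, single-rater agreement lands near .50, illustrating how noisy an individual recruiter’s judgment can be even when profiles carry real signal.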

Biases as a danger to validity: Thinking, fast and wrong

We share the common worry that cybervetting gives recruiters access to more personal information, which could bias hiring decisions—and seriously affect validity. Several job-irrelevant details (e.g., an applicant’s political affiliation, religion, and sexuality) have been shown to influence recruiters’ judgments even after bias training (Hartwell & Campion, 2020; Roth et al., 2020; Zhang et al., 2020). Furthermore, biases also appear when recruiters view inappropriate content (drug abuse, partying, rude language; Becton, Walker, Gilstrap, et al., 2019; Tews et al., 2020), spelling mistakes, and text speak (Scott et al., 2014), and they arise due to an applicant’s attractiveness (Carr et al., 2017). Concerning gender, contrary to Wilcox et al.’s (2022) proposition, various studies unexpectedly found higher cybervetting ratings for women than for men (Becton, Walker, Gilstrap, et al., 2019; Roulin & Levashina, 2019; Van Iddekinge et al., 2016).

Wilcox et al. (2022) cautioned against stereotypes concerning what an ideal employee looks like, and Roth et al. (2016) suggested that the benchmark for this perfect candidate may be the recruiters themselves: Similarity attraction could be the main mechanism by which recruiters judge applicants when personal information becomes available through cybervetting. That is, if recruiter and applicant share similar views or preferences, this might lead to interpersonal attraction, which would result in positively biased hireability judgments (Roth et al., 2017). Two recent studies supported this bias for applicants’ political affiliation (Roth et al., 2020; Wade et al., 2020): Both found large effects of political similarity on cybervetting judgments, remarkably exceeding the effect of individuating information (e.g., grades, work experience). This provides initial evidence for the important role of similarity-attraction biases in cybervetting.

What we do not know so far: An updated research agenda

By now, we can conclude that (a) some evidence exists concerning convergent validity and (b) recruiters focus (at least partly) on valid criteria, but (c), overall, cybervetting’s validity remains controversial due to contradictory results, unclear reliability, and evidence of judgment biases. To make sense of this, one might take a closer look at moderator and mediator variables in these studies. Thus, in the following, we suggest starting points for future research questions addressing the influences of methods, recruiters, job candidates, and context.

Toward more robust conclusions through research design

We see several opportunities to extend previous research designs. So far, many findings on cybervetting have relied on interviews (e.g., Berkelaar & Buzzanell, 2015; Hoek et al., 2016; McDonald et al., 2021); although interviews are well suited to exploring new issues, we call for more studies that test the proposed hypotheses (see Davison et al., 2011; Landers & Schmidt, 2016; Roth et al., 2016; Wilcox et al., 2022). Only a few cybervetting studies have used experimental designs: Because self-insight is limited, especially regarding judgment biases, it might be risky to rely on what recruiters believe they use to judge applicants in cybervetting. Rather, adopting experimental approaches (e.g., Becton, Walker, Gilstrap, et al., 2019; Roth et al., 2020) might be fruitful for shining a light on the mechanisms involved in cybervetting. Approaches using experimental test validation would add to common approaches by examining how cybervetting causally affects selection-related outcomes (see Krumm et al., 2019; Schäpers et al., 2020).

Furthermore, Roulin and Levashina (2019) stressed the importance of trait visibility for cybervetting’s validity. This might be a key to understanding the previous debate: Cybervetting is a rating by others (i.e., recruiters, supervisors) and thus depends on a particular trait’s observability. The limited convergent validity of cybervetting might therefore be due to the fundamentally different perspectives of the self and others (Vazire, 2010). Hence, one might evaluate validity coefficients not only in comparison with self-ratings but also with valid other-ratings in personnel selection (e.g., interviews, assessment centers; see Lievens & Van Iddekinge, 2016).

Many cybervetting studies have focused on Big Five judgments. However, studies have shown that broad Big Five traits are not the best predictors of performance (Judge et al., 2013; Shaffer & Postlethwaite, 2012)—even when measured with self-reports. Rather, it might be promising to consider the symmetry principle between predictor and criterion (see Schulze et al., 2021) and to focus on applicant characteristics that are more closely related to performance (e.g., contextualized traits, integrity, cognitive ability; Sackett et al., 2021). Although cybervetting judgments show only “low correlations” (Wilcox et al., 2022, p. X) with performance, the results, for example, by Kluemper et al. (2012) are not very different from the validity coefficients that can be expected for the Big Five (see Sackett et al., 2021). This underlines how the cybervetting debate can draw on research in personnel selection (Lievens & Van Iddekinge, 2016).

Cybervetting studies could further improve their robustness by applying structural equation modeling to separate latent traits (e.g., Big Five, cognitive ability) from measurement error. In this context, we suggest that convergent validity could be approached using the established multitrait-multimethod correlation framework, thereby differentiating trait and method effects (Eid et al., 2008). Concerning recruiter effects (e.g., judgment biases) in cybervetting, studies should also consider multilevel analyses when they use nested designs—that is, when many applicant profiles are clustered within a few recruiters’ judgments.
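As a toy illustration of the multitrait-multimethod logic (all variable names and effect sizes below are assumptions for demonstration, not estimates from the literature), the following sketch contrasts convergent, monotrait-heteromethod correlations with discriminant, heterotrait-heteromethod correlations between simulated self-reports and cybervetting ratings:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
traits = ["extraversion", "conscientiousness", "openness"]

latent = rng.normal(size=(n, len(traits)))                       # true trait scores
self_report = latent + rng.normal(scale=0.5, size=latent.shape)  # method 1
# Method 2: cybervetting ratings with a weaker trait signal plus
# shared method variance (e.g., a recruiter's overall leniency).
method = rng.normal(scale=0.6, size=(n, 1))
cyber = 0.6 * latent + method + rng.normal(scale=0.8, size=latent.shape)

# Monotrait-heteromethod correlations: the convergent-validity diagonal.
for j, trait in enumerate(traits):
    r = np.corrcoef(self_report[:, j], cyber[:, j])[0, 1]
    print(f"convergent r ({trait}): {r:.2f}")

# Heterotrait-heteromethod correlations should be markedly lower.
r_het = np.corrcoef(self_report[:, 0], cyber[:, 1])[0, 1]
print(f"discriminant r (extraversion/self vs. conscientiousness/cyber): {r_het:.2f}")
```

In a full analysis, such correlations would not be inspected by hand but modeled with latent trait and method factors in a multitrait-multimethod structural equation model (e.g., following Eid et al., 2008), which separates trait variance from method variance and measurement error.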

Investigating the role of recruiter characteristics in cybervetting

Cybervetting depends on recruiters’ judgments. As studies on judgment biases have shown, recruiters’ attitudes and characteristics can play an important role in validity. However, recruiters’ decision processes in cybervetting have largely remained a black box (for first insights, see Hartwell & Campion, 2020). Influential factors in cybervetting judgments might be recruiters’ own use of SM platforms (Nikolaou, 2014), their sense of ethics (McDonald et al., 2021), and their perceptions of professionalism (Berkelaar & Buzzanell, 2015).

Regarding recruiter biases, many questions remain. For example, we noted above that similarity attraction might be a primary mechanism in cybervetting. Although only political affiliation has been tested so far (Roth et al., 2020; Wade et al., 2020), this bias might also be triggered by, for example, personality, values, and preferences (Byrne, 1997; Van Hoye & Turban, 2015). Likewise, demographic similarity biases (age, race, gender) have remained unaddressed. Researchers should also focus on developing effective training that protects recruiters against cybervetting biases, as previous attempts have failed (Schroeder et al., 2020; Zhang et al., 2020).

Investigating the role of applicant characteristics in cybervetting

Cybervetting’s cue validity depends on the SM content provided by applicants. Therefore, applicants’ attitudes and characteristics might influence cybervetting’s validity, too. For instance, Aguado et al. (2016), Cook et al. (2020), and Stoughton et al. (2015) reported distinct attitudes toward cybervetting. How does skepticism toward cybervetting influence its validity? Based on feelings of surveillance (Duffy & Chan, 2019), applicants might increase their faking behavior, which, in turn, might decrease cybervetting’s validity (Guillory & Hancock, 2012; Schroeder & Cavanaugh, 2018). Finally, an applicant’s age and gender also seem to influence attitudes toward cybervetting (Aguado et al., 2016; Cook et al., 2020).

Maybe it depends: Context effects

Finally, in practice, every cybervetting procedure takes place with respect to a specific job position and in a certain industry. Wilcox et al. (2022) pointed out the moderating effect of industry sectors on cybervetting, and initial reports have also shown differences between cultures concerning SM use (Shields & Levashina, 2016) and cybervetting (El Ouirdi et al., 2016). Thus, we suggest that context effects should be tested in cybervetting, as cybervetting criteria might differ for a job position in, for instance, accounting versus management of a firm.

Moreover, cybervetting studies have generally evaluated validity on either Facebook or LinkedIn. Although most studies have focused on Facebook, every SM platform has its unique content and implications (e.g., LinkedIn endorsements). For instance, applicants consider LinkedIn to be more job related than other platforms (Aguado et al., 2016; Cook et al., 2020), and recruiters prefer LinkedIn for evaluating P–J fit but Facebook for judging personality (Hartwell & Campion, 2020; Roulin & Bangerter, 2013). Also, LinkedIn might provide more structure for cybervetting (Lievens & Van Iddekinge, 2016). Therefore, studies need to test the assumption of improved validity when professional SM platforms are used for cybervetting. Finally, recruiters use multiple sources while cybervetting (Berkelaar & Buzzanell, 2015) in that they tend to combine impressions from, for example, Facebook and LinkedIn rather than relying on only one platform. Thus, future studies should extend their focus to the joint validity of platforms as well as to additional platforms (e.g., Instagram, TikTok) to draw a comprehensive picture of cybervetting’s validity.

Conclusion

Is cybervetting valuable? We emphasize that one point to consider when answering this question is cybervetting’s validity. However, although previous validity studies have provided valuable initial insights, they have also reported contradictory results, leaving many questions open. Thus, it is still too early to draw conclusions about cybervetting’s validity; to do so, we need to consider several more variables. This controversy might be part of the adaptation process of a new signal (Bangerter et al., 2012; Roulin & Bangerter, 2013).

We share Wilcox et al.’s (2022) view that job candidates are not responsible for evaluating the accuracy of cybervetting; this is the responsibility of I-O psychology, specifically with respect to testing propositions empirically, addressing moderators as well as mediators, and informing practitioners about findings. Furthermore, we should act fast (see White et al., 2022): Although cybervetting might be a rather new practice from a scientific point of view, it is not new to the organizations that use it. For that reason, we have offered suggestions above on how to reach well-balanced conclusions about cybervetting. Now is the time to build on these first findings, and now is the time to echo the many calls for insights into cybervetting’s validity.

Footnotes

Franz Wilhelm Mönke and Philipp Schäpers thank the State of North Rhine-Westphalia’s Ministry of Economic Affairs, Innovation, Digitalization, and Energy as well as the Exzellenz Start-up Center.NRW program at the REACH–EUREGIO Start-Up Center for their kind support of their work. Franz Wilhelm Mönke, https://orcid.org/0000-0002-6634-0193; Philipp Schäpers, https://orcid.org/0000-0002-8270-5105.

Correspondence concerning this commentary should be addressed to Franz Wilhelm Mönke, Division Psychology of Entrepreneurship, University of Münster, Fliednerstraße 21, 48149 Münster, Germany.

References

Aguado, D., Andrés, J. C., García-Izquierdo, A. L., & Rodríguez, J. (2019). LinkedIn “Big Four”: Job performance validation in the ICT sector. Journal of Work and Organizational Psychology, 35(2), 53–64. https://doi.org/10.5093/jwop2019a7
Aguado, D., Rico, R., Rubio, V. J., & Fernández, L. (2016). Applicant reactions to social network web use in personnel selection and assessment. Journal of Work and Organizational Psychology, 32(3), 183–190. https://doi.org/10.1016/j.rpto.2016.09.001
Azucar, D., Marengo, D., & Settanni, M. (2018). Predicting the Big 5 personality traits from digital footprints on social media: A meta-analysis. Personality and Individual Differences, 124, 150–159. https://doi.org/10.1016/j.paid.2017.12.018
Back, M. D., Stopfer, J. M., Vazire, S., Gaddis, S., Schmukle, S. C., Egloff, B., & Gosling, S. D. (2010). Facebook profiles reflect actual personality, not self-idealization. Psychological Science, 21(3), 372–374. https://doi.org/10.1177/0956797609360756
Bangerter, A., Roulin, N., & König, C. J. (2012). Personnel selection as a signaling game. Journal of Applied Psychology, 97(4), 719–738. https://doi.org/10.1037/a0026078
Becton, J. B., Walker, H. J., Gilstrap, J. B., & Schwager, P. H. (2019). Social media snooping on job applicants: The effects of unprofessional social media information on recruiter perceptions. Personnel Review, 48(5), 1261–1280. https://doi.org/10.1108/PR-09-2017-0278
Becton, J. B., Walker, H. J., Schwager, P., & Gilstrap, J. B. (2019). Is what you see what you get? Investigating the relationship between social media content and counterproductive work behaviors, alcohol consumption, and episodic heavy drinking. The International Journal of Human Resource Management, 30(15), 2251–2272. https://doi.org/10.1080/09585192.2017.1314977
Berkelaar, B. L., & Buzzanell, P. M. (2015). Online employment screening and digital career capital: Exploring employers’ use of online information for personnel selection. Management Communication Quarterly, 29(1), 84–113. https://doi.org/10.1177/0893318914554657
Brown, V. R., & Vaughn, E. D. (2011). The writing on the (Facebook) wall: The use of social networking sites in hiring decisions. Journal of Business and Psychology, 26(2), 219–225. https://doi.org/10.1007/s10869-011-9221-x
Brunswik, E. (1952). The conceptual framework of psychology. The University of Chicago Press.
Byrne, D. (1997). An overview (and underview) of research and theory within the attraction paradigm. Journal of Social and Personal Relationships, 14(3), 417–431. https://doi.org/10.1177/0265407597143008
Caers, R., & Castelyns, V. (2011). LinkedIn and Facebook in Belgium: The influences and biases of social network sites in recruitment and selection procedures. Social Science Computer Review, 29(4), 437–448. https://doi.org/10.1177/0894439310386567
Carr, C. T., Hall, R. D., Mason, A. J., & Varney, E. J. (2017). Cueing employability in the Gig Economy: Effects of task-relevant and task-irrelevant information on Fiverr. Management Communication Quarterly, 31(3), 409–428. https://doi.org/10.1177/0893318916687397
Cook, R., Jones-Chick, R., Roulin, N., & O’Rourke, K. (2020). Job seekers’ attitudes toward cybervetting: Scale development, validation, and platform comparison. International Journal of Selection and Assessment, 28(4), 383–398. https://doi.org/10.1111/ijsa.12300
Cubrich, M., King, R. T., Mracek, D. L., Strong, J. M. G., Hassenkamp, K., Vaughn, D., & Dudley, N. M. (2021). Examining the criterion-related validity evidence of LinkedIn profile elements in an applied sample. Computers in Human Behavior, 120, Article 106742. https://doi.org/10.1016/j.chb.2021.106742
Davison, H. K., Maraist, C., & Bing, M. N. (2011). Friend or foe? The promise and pitfalls of using social networking sites for HR decisions. Journal of Business and Psychology, 26(2), 153–159. https://doi.org/10.1007/s10869-011-9215-8
Duffy, B. E., & Chan, N. K. (2019). “You never really know who’s looking”: Imagined surveillance across social media platforms. New Media & Society, 21(1), 119–138. https://doi.org/10.1177/1461444818791318
Eid, M., Nussbeck, F. W., Geiser, C., Cole, D. A., Gollwitzer, M., & Lischetzke, T. (2008). Structural equation modeling of multitrait-multimethod data: Different models for different types of methods. Psychological Methods, 13(3), 230–253. https://doi.org/10.1037/a0013219
El Ouirdi, M., Pais, I., Segers, J., & El Ouirdi, A. (2016). The relationship between recruiter characteristics and applicant assessment on social media. Computers in Human Behavior, 62, 415–422. https://doi.org/10.1016/j.chb.2016.04.012
Fernandez, S., Stöcklin, M., Terrier, L., & Kim, S. (2021). Using available signals on LinkedIn for personality assessment. Journal of Research in Personality, 93, Article 104122. https://doi.org/10.1016/j.jrp.2021.104122
Gosling, S. D., Ko, S. J., Mannarelli, T., & Morris, M. E. (2002). A room with a cue: Personality judgments based on offices and bedrooms. Journal of Personality and Social Psychology, 82(3), 379–398. https://doi.org/10.1037/0022-3514.82.3.379
Guillory, J., & Hancock, J. T. (2012). The effect of Linkedin on deception in resumes. Cyberpsychology, Behavior, and Social Networking, 15(3), 135–140. https://doi.org/10.1089/cyber.2011.0389
Hartwell, C. J., & Campion, M. A. (2020). Getting social in selection: How social networking website content is perceived and used in hiring. International Journal of Selection and Assessment, 28(1), 1–16. https://doi.org/10.1111/ijsa.12273
Hoek, J., O’Kane, P., & McCracken, M. (2016). Publishing personal information online: How employers’ access, observe and utilise social networking sites within selection procedures. Personnel Review, 45(1), 67–83. https://doi.org/10.1108/PR-05-2014-0099
Judge, T. A., Rodell, J. B., Klinger, R. L., Simon, L. S., & Crawford, E. R. (2013). Hierarchical representations of the five-factor model of personality in predicting job performance: Integrating three organizing frameworks with two theoretical perspectives. Journal of Applied Psychology, 98(6), 875–925. https://doi.org/10.1037/a0033901
Kluemper, D. H., & Rosen, P. A. (2009). Future employment selection methods: Evaluating social networking websites. Journal of Managerial Psychology, 24(6), 567–580. https://doi.org/10.1108/02683940910974134
Kluemper, D. H., Rosen, P. A., & Mossholder, K. W. (2012). Social networking websites, personality ratings, and the organizational context: More than meets the eye? Journal of Applied Social Psychology, 42(5), 1143–1172. https://doi.org/10.1111/J.1559-1816.2011.00881.X
Kristof-Brown, A. L., Zimmerman, R. D., & Johnson, E. C. (2005). Consequences of individuals’ fit at work: A meta-analysis of person-job, person-organization, person-group, and person-supervisor fit. Personnel Psychology, 58(2), 281–342. https://doi.org/10.1111/j.1744-6570.2005.00672.x
Krumm, S., Hüffmeier, J., & Lievens, F. (2019). Experimental test validation: Examining the path from test elements to test performance. European Journal of Psychological Assessment, 35(2), 225–232. https://doi.org/10.1027/1015-5759/a000393
Landers, R. N., & Schmidt, G. B. (Eds.). (2016). Social media in employee selection and recruitment: Theory, practice, and current challenges. Springer. https://doi.org/10.1007/978-3-319-29989-1
Lievens, F., & Van Iddekinge, C. H. (2016). Reducing the noise from scraping social media content: Some evidence-based recommendations. Industrial and Organizational Psychology, 9(3), 660–666. https://doi.org/10.1017/iop.2016.67
McDonald, S., Damarin, A. K., McQueen, H., & Grether, S. T. (2021). The hunt for red flags: Cybervetting as morally performative practice. Socio-Economic Review, Article mwab002. https://doi.org/10.1093/ser/mwab002
Nikolaou, I. (2014). Social networking web sites in job search and employee recruitment. International Journal of Selection and Assessment, 22(2), 179–189. https://doi.org/10.1111/ijsa.12067
Pike, J. C., Bateman, P. J., & Butler, B. S. (2018). Information from social networking sites: Context collapse and ambiguity in the hiring process. Information Systems Journal, 28(4), 729–758. https://doi.org/10.1111/isj.12158
Roth, P. L., Bobko, P., Van Iddekinge, C. H., & Thatcher, J. B. (2016). Social media in employee-selection-related decisions: A research agenda for uncharted territory. Journal of Management, 42(1), 269–298. https://doi.org/10.1177/0149206313503018
Roth, P. L., Goldberg, C. B., & Thatcher, J. B. (2017). The role of political affiliation in employment decisions: A model and research agenda. Journal of Applied Psychology, 102(9), 1286–1304. https://doi.org/10.1037/apl0000232
Roth, P. L., Thatcher, J. B., Bobko, P., Matthews, K. D., Ellingson, J. E., & Goldberg, C. B. (2020). Political affiliation and employment screening decisions: The role of similarity and identification processes. Journal of Applied Psychology, 105(5), 472–486. https://doi.org/10.1037/apl0000422
Roulin, N., & Bangerter, A. (2013). Social networking websites in personnel selection: A signaling perspective on recruiters’ and applicants’ perceptions. Journal of Personnel Psychology, 12(3), 143–151. https://doi.org/10.1027/1866-5888/a000094
Roulin, N., & Levashina, J. (2019). LinkedIn as a new selection method: Psychometric properties and assessment approach. Personnel Psychology, 72(2), 187–211. https://doi.org/10.1111/PEPS.12296
Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2021). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0000994
Schäpers, P., Mussel, P., Lievens, F., König, C. J., Freudenstein, J.-P., & Krumm, S. (2020). The role of situations in Situational Judgment Tests: Effects on construct saturation, predictive validity, and applicant perceptions. Journal of Applied Psychology, 105(8), 800–818. https://doi.org/10.1037/apl0000457
Schneider, T. J., Goffin, R. D., & Daljeet, K. N. (2015). “Give us your social networking site passwords”: Implications for personnel selection and personality. Personality and Individual Differences, 73, 78–83. https://doi.org/10.1016/j.paid.2014.09.026
Schroeder, A. N., & Cavanaugh, J. M. (2018). Fake it ’til you make it: Examining faking ability on social media pages. Computers in Human Behavior, 84, 29–35. https://doi.org/10.1016/j.chb.2018.02.011
Schroeder, A. N., Odd, K. R., & Whitaker, J. H. (2020). Agree to disagree: Examining the psychometrics of cybervetting. Journal of Managerial Psychology, 35(5), 435–450. https://doi.org/10.1108/JMP-09-2018-0420
Schulze, J., West, S. G., Freudenstein, J., Schäpers, P., Mussel, P., Eid, M., & Krumm, S. (2021). Hidden framings and hidden asymmetries in the measurement of personality—A combined lens-model and frame-of-reference perspective. Journal of Personality, 89(2), 357–375. https://doi.org/10.1111/jopy.12586
Scott, G. G., Sinclair, J., Short, E., & Bruce, G. (2014). It’s not what you say, it’s how you say it: Language use on Facebook impacts employability but not attractiveness. Cyberpsychology, Behavior, and Social Networking, 17(8), 562–566. https://doi.org/10.1089/cyber.2013.0584
Settanni, M., Azucar, D., & Marengo, D. (2018). Predicting individual characteristics from digital traces on social media: A meta-analysis. Cyberpsychology, Behavior, and Social Networking, 21(4), 217–228. https://doi.org/10.1089/cyber.2017.0384
Shaffer, J. A., & Postlethwaite, B. E. (2012). A matter of context: A meta-analytic investigation of the relative validity of contextualized and noncontextualized personality measures. Personnel Psychology, 65(3), 445–494. https://doi.org/10.1111/j.1744-6570.2012.01250.x
Shields, B., & Levashina, J. (2016). Comparing the social media in the United States and BRIC nations, and the challenges faced in international selection. In Landers, R. N. & Schmidt, G. B. (Eds.), Social media in employee selection and recruitment: Theory, practice, and current challenges (pp. 157–174). Springer. https://doi.org/10.1007/978-3-319-29989-1_8
Stoughton, J. W., Thompson, L. F., & Meade, A. W. (2015). Examining applicant reactions to the use of social networking websites in pre-employment screening. Journal of Business and Psychology, 30, 73–88. https://doi.org/10.1007/s10869-013-9333-6
Suen, H.-Y. (2018). How passive job candidates respond to social networking site screening. Computers in Human Behavior, 85, 396–404. https://doi.org/10.1016/j.chb.2018.04.018
Tews, M. J., Stafford, K., & Kudler, E. P. (2020). The effects of negative content in social networking profiles on perceptions of employment suitability. International Journal of Selection and Assessment, 28(1), 17–30. https://doi.org/10.1111/ijsa.12277
Van de Ven, N., Bogaert, A., Serlie, A., Brandt, M. J., & Denissen, J. J. A. (2017). Personality perception based on LinkedIn profiles. Journal of Managerial Psychology, 32(6), 418–429. https://doi.org/10.1108/JMP-07-2016-0220
Van Hoye, G., & Turban, D. B. (2015). Applicant-employee fit in personality: Testing predictions from similarity-attraction theory and trait activation theory. International Journal of Selection and Assessment, 23(3), 210–223. https://doi.org/10.1111/ijsa.12109
Van Iddekinge, C. H., Lanivich, S. E., Roth, P. L., & Junco, E. (2016). Social media for selection? Validity and adverse impact potential of a Facebook-based assessment. Journal of Management, 42(7), 1811–1835. https://doi.org/10.1177/0149206313515524
Vazire, S. (2010). Who knows what about a person? The self–other knowledge asymmetry (SOKA) model. Journal of Personality and Social Psychology, 98(2), 281–300. https://doi.org/10.1037/a0017908
Vazire, S., & Gosling, S. D. (2004). e-Perceptions: Personality impressions based on personal websites. Journal of Personality and Social Psychology, 87(1), 123–132. https://doi.org/10.1037/0022-3514.87.1.123
Wade, J. T., Roth, P. L., Thatcher, J. B., & Dinger, M. (2020). Social media and selection: Political issue similarity, liking, and the moderating effect of social media platform. Management Information Systems Quarterly, 44(3), 1301–1357. https://doi.org/10.25300/MISQ/2020/14119
White, J., Ravid, D., Siderits, I., & Behrend, T. S. (2022). An urgent call for I-O psychologists to produce timelier technology research. Industrial and Organizational Psychology, 15(3). https://doi.org/10.31234/osf.io/785xf
Wilcox, A., Damarin, A., & McDonald, S. (2022). Is cybervetting valuable? Industrial and Organizational Psychology, 15(3). https://doi.org/10.31219/osf.io/f52a7
Zhang, L., Van Iddekinge, C. H., Arnold, J. D., Roth, P. L., Lievens, F., Lanivich, S. E., & Jordan, S. L. (2020). What’s on job seekers’ social media sites? A content analysis and effects of structure on recruiter judgments and predictive validity. Journal of Applied Psychology, 105(12), 1530–1546. https://doi.org/10.1037/apl0000490
Zide, J., Elman, B., & Shahani-Denning, C. (2014). LinkedIn and recruitment: How profiles differ across occupations. Employee Relations, 36(5), 583–604. https://doi.org/10.1108/ER-07-2013-0086