
39 - Human Rights, Psychology, and Artificial Intelligence

from Part V - Future Directions

Published online by Cambridge University Press:  02 October 2020

Neal S. Rubin, Adler University
Roseanne L. Flores, Hunter College, City University of New York

Summary

This chapter provides several examples of how artificial intelligence–based technologies are changing human rights practice, from detecting abuses to dealing with their aftermath. It especially focuses on three critical issues where the field of psychology can address a spectrum of human rights needs. The first is the psychological impact of the application of AI within society, specifically the positive and negative impacts of its use within humanitarian and human rights work. The second is the risk of its application perpetuating bias and discrimination. The third is the spread of disinformation and the manipulation of public opinion. While the chapter touches on all three issues, it particularly focuses on the third because of the central role disinformation is currently playing in everything from democratic governance to daily life. For each of these issues, the chapter summarizes how psychological research might provide critical insights for mitigating harm. The chapter closes with priority considerations for minimizing the negative effects of AI on human rights.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2020


