
Artificial Intelligence and Human Rights: Four Realms of Discussion: Summary of Remarks

Published online by Cambridge University Press: 01 March 2021

Jootaek Lee*
Affiliation:
Assistant Professor and Librarian at Rutgers Law School (Newark), as well as Adjunct Professor and affiliated faculty member for the Program on Human Rights and the Global Economy (PHRGE) at Northeastern University School of Law.

Extract

The meaning of the term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now referred to as deep learning or machine learning. AI is defined as a machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and made effective by big data capturing the present and the past, while still necessarily carrying human biases into models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historical and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.
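The extract's observation that algorithms trained on big data necessarily carry human biases into their models can be made concrete with a minimal, hypothetical sketch. Everything below (the `historical_decisions` records, the group labels, and the `predict_approval` helper) is invented for illustration and is not drawn from the article; it only shows how a predictor fitted to biased past decisions reproduces that bias.

```python
from collections import defaultdict

# Invented historical records: (group, qualified, past human decision).
historical_decisions = [
    ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Train" the simplest possible predictor: approval rate per (group, qualified).
counts = defaultdict(lambda: [0, 0])  # key -> [approvals, total]
for group, qualified, approved in historical_decisions:
    counts[(group, qualified)][0] += int(approved)
    counts[(group, qualified)][1] += 1

def predict_approval(group: str, qualified: bool) -> float:
    """Predicted approval probability learned from the (biased) history."""
    approvals, total = counts[(group, qualified)]
    return approvals / total if total else 0.0

# Equally qualified applicants receive different predictions because the
# past decisions the model was fitted to treated the two groups differently.
print(predict_approval("A", True))  # 1.0
print(predict_approval("B", True))  # 0.5
```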

Type
Contemporary Human Rights Research: Researching Human Rights and Artificial Intelligence
Copyright
Copyright © The Author(s), 2021. Published by Cambridge University Press on behalf of The American Society of International Law.


References

1 Jürgen Schmidhuber, 2006: Celebrating 75 years of AI - History and Outlook: The Next 25 Years 2, Invited Contribution to the Proceedings of the “50th Anniversary Summit of Artificial Intelligence” at Monte Verita, Ascona, Switzerland, July 9–14, 2006 (variant accepted for Springer's LNAI series) (2007).

2 Id. at 1–2.

3 Mathias Risse, Human Rights and Artificial Intelligence: An Urgently Needed Agenda, 41 Hum. Rts. Q. 2 (2019).

4 Id. (citing Julia Angwin, Jeff Larson, Surya Mattu & Lauren Kirchner, Machine Bias, ProPublica (May 23, 2016); Reuben Binns, Fairness in Machine Learning: Lessons from Political Philosophy, 81 Proc. Machine Learning Res. 149 (2018); Brent D. Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi, The Ethics of Algorithms: Mapping the Debate, Big Data & Soc'y 1 (2016); Osonde A. Osoba & William Welser IV, An Intelligence in Our Image: The Risk of Bias and Errors in Artificial Intelligence (2017)).

5 Eileen Donahoe & Megan MacDuffee Metzger, Artificial Intelligence and Human Rights, 30 J. Democracy 115, 115 (2019).

6 World Economic Forum, Harnessing Artificial Intelligence for the Earth 7 (Jan. 2018), available at http://www3.weforum.org/docs/Harnessing_Artificial_Intelligence_for_the_Earth_report_2018.pdf.

7 See Jeremy Rifkin, The End of Work: The Decline of the Global Labor Force and the Dawn of the Post-Market Era 59–164 (1995).

8 World Economic Forum, supra note 6.

9 Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences (2011). The theory has been criticized for including too many aspects of human character in the definition of intelligence. See Gardner's Theory of Multiple Intelligences, at https://www.verywellmind.com/gardners-theory-of-multiple-intelligences-2795161.

10 See Nick Bostrom, How Long Before Superintelligence?, 2 Int'l J. Future Stud. (1998), at https://www.nickbostrom.com/superintelligence.html.

11 Steven Livingston & Mathias Risse, The Future Impact of Artificial Intelligence on Humans and Human Rights, 33 Eth. & Int'l Aff. 141, 145 (2019) (quoting the comment by Vernor Vinge at the 1993 VISION-21 Symposium). The algorithms of DeepMind Technologies, Google's DeepMind, and Google Brain are the best examples relating to AGI.

12 Schmidhuber, supra note 1, at 7.

13 It is also known as history's convergence or the Omega point (Ω). Id. at 10–11.

14 See Vernor Vinge, The Coming Technological Singularity: How to Survive in the Post-Human Era (1993), at https://mindstalk.net/vinge/vinge-sing.html; Jesse Parker, Singularity: A Matter of Life and Death, Disruptor Daily (Sept. 13, 2007), at https://www.disruptordaily.com/singularity-matter-life-death.

15 Sophia, a humanoid robot, is an example, at https://www.hansonrobotics.com/sophia.

16 Moore's Law suggests that computing power has been improving exponentially since 1971. See Moore's Law, Wikipedia, at https://en.wikipedia.org/wiki/Moore%27s_law.

17 The areas of human life that will require regulation include the human rights to development, climate change, life, health, education, criminal justice, equal protection, due process, work, and privacy.