An Inclusive Ethical Design Perspective for a Flourishing Future with Artificial Intelligent Systems

Published online by Cambridge University Press:  19 December 2018

Abstract

The article provides an inclusive outlook on artificial intelligence by introducing a three-legged design perspective that includes, but also moves beyond, ethical artificial systems design to stress the role of moral habituation of professionals and the general public. It is held that an inclusive ethical design perspective is essential for a flourishing future with artificial intelligence.

Type: Articles
Copyright: © Cambridge University Press

Footnotes

*

Associate Professor, The University of Southern Denmark, Department of Design and Communication.

References

1 AIS broadly covers artificial intelligence in artificial intelligent systems and advanced intelligent robotics.

2 The term “systems developer” refers to engineers, software developers, computer scientists, and other practitioners involved in developing AIS.

3 Turing, AM, “On Computable Numbers, with an Application to the Entscheidungsproblem” (1937) XLII Proceedings of the London Mathematical Society, Series 2, 236.

4 Turing, AM, “Computing Machinery and Intelligence” (1950) 59 Mind 434.

5 ibid, 438.

6 ibid, 438.

7 Newell, A and Simon, HA, “Computer science as empirical inquiry: symbols and search” (1976) 19(3) CACM 113.

8 The historical outline in this section also draws on Russell, SJ and Norvig, P, Artificial Intelligence – A Modern Approach (Pearson 2015) pp 16–27.

9 ibid, p 25.

10 Dreyfus, HL, Alchemy and Artificial Intelligence (Ft Belvoir: Defense Technical Information Center 1965) p 17.

11 Dreyfus, HL, “Why Heideggerian AI failed and how fixing it would require making it more Heideggerian” (2007) 171(18) Artificial Intelligence 1142.

12 Cristianini, N, “Machines that learn: the mechanics of artificial minds” in D Heaven (ed), Machines that Think (New Scientist 2017) pp 36–37.

13 See A Aspuru-Guzik et al, Materials Acceleration Platform – Accelerating Advanced Energy Materials Discovery by Integrating High-Throughput Methods with Artificial Intelligence (January 2018) p ii, available at <mission-innovation.net/wp-content/uploads/2018/01/Mission-Innovation-IC6-Report-Materials-Acceleration-Platform-Jan-2018.pdf> accessed 14 November 2018.

14 Heaven, supra, note 12, p 75.

15 Yudkowsky, E, “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in N Bostrom and MM Ćirković (eds), Global Catastrophic Risks (Oxford University Press 2011) p 308.

16 LeCun, Y et al, “Deep Learning” (2015) 521(28) Nature 436.

17 Voosen, P, “How AI detectives are cracking open the black box of deep learning” (2017) Science, 6 July, <www.sciencemag.org/news/2017/07/how-ai-detectives-are-cracking-open-black-box-deep-learning> accessed 14 November 2018.

18 Supra, note 14, p 44.

19 Russell, S, Dewey, D and Tegmark, M, “Research Priorities for Robust and Beneficial Artificial Intelligence” (2015) AI Magazine, Winter, 105.

20 D Gunning, “Explainable Artificial Intelligence (XAI)” (2018), available at <www.darpa.mil/program/explainable-artificial-intelligence> accessed 14 November 2018.

21 Supra, note 17.

22 Supra, note 19, p 110; see also Gerdes, A and Øhrstrøm, P, “Issues in robot ethics seen through the lens of a moral Turing Test” (2015) 13(2) Journal of Information, Communication and Ethics in Society 98.

23 Supra, note 19, p 110.

24 As in the well-known case of Microsoft’s Twitter bot Tay, which was supposed to learn to communicate from real-time streams of tweets. Overnight, however, Tay’s tweets turned malicious. This could have been prevented if Microsoft had “tweaked” Tay to limit the influence of biased data (eg from trolls and people making bad jokes), allowing Tay to digest data in a morally sound manner.
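By way of illustration only, the kind of “tweak” alluded to above could be as simple as screening incoming data before it is allowed to influence the model. The following minimal Python sketch is hypothetical (the blocklist terms and function names are invented for this example, and Microsoft’s actual pipeline is not being described); it merely shows crude keyword-based curation of a tweet stream prior to learning.

# Hypothetical sketch: discard tweets containing blocked terms before they reach training.
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # placeholder terms, assumption only

def is_acceptable(tweet: str) -> bool:
    # A tweet passes only if it contains none of the blocked terms.
    text = tweet.lower()
    return not any(term in text for term in BLOCKED_TERMS)

def curate(tweet_stream):
    # Yield only tweets that pass the screen; everything else is kept away from the model.
    for tweet in tweet_stream:
        if is_acceptable(tweet):
            yield tweet

# Example: only the first tweet would be passed on for learning.
sample = ["hello there", "something containing blocked_term_1"]
print(list(curate(sample)))  # -> ['hello there']

A keyword screen of this kind is far too crude for real deployment; it is included only to make concrete the sort of data curation the footnote refers to.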

25 Wallach, W and Allen, C, Moral Machines – Teaching Robots Right from Wrong (Oxford University Press 2009).

26 See NE, Book 2, Aristotle, Nicomachean Ethics, H Rackham (trans) (Harvard University Press, William Heinemann Ltd 1934).

27 B Rennix and NJ Robinson, “The Trolley Problem Will Tell You Nothing Useful about Morality” (2017) Current Affairs, available at <www.currentaffairs.org/2017/11/the-trolley-problem-will-tell-you-nothing-useful-about-morality> accessed 14 November 2018.

28 Supra, note 26, NE, Book 6.

29 MacIntyre, A, After Virtue – a Study in Moral Theory (Oxford University Press 2000) p 154.

30 Dunne, J, Back to the Rough Ground – Practical Judgment and the Lure of Technique (University of Notre Dame Press 1993) p 275.

31 Supra, note 29, p 154.

32 Eikeland, O, The Ways of Aristotle – Aristotelian Phrónesis, Aristotelian Philosophy of Dialogue, and Action Research (Peter Lang AG 2008) p 53.

33 Vallor, S, Technology and the Virtues – a Philosophical Guide to a Future Worth Wanting (Oxford University Press 2016) p 76.

34 ibid, p 77.

35 Habermas, J, The Theory of Communicative Action. Volume 1: Reason and the Rationalization of Society (Beacon 1984).

36 Arendt, H, Lectures on Kant’s Political Philosophy (The University of Chicago Press 1992) p 43.

37 Bødker, S et al, “A UTOPIAN Experience: ‘On Design of Powerful Computer-Based Tools for Skilled Graphical Workers’” in G Bjerknes et al (eds), Computers and Democracy – A Scandinavian Challenge (Gower Publishing 1987) p 251.

38 Brundage, M et al, The Malicious Use of AI – Forecasting, Prevention, and Mitigation (Future of Humanity Institute 2018), available at <arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf> accessed 14 November 2018.

39 ibid, pp 4–5.

40 See ACM Code of Ethics and Professional Conduct at <www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct#imp1.3> accessed 14 November 2018.

41 The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2 (IEEE 2017), available at <standards.ieee.org/industry-connections/ec/autonomous-systems.html> accessed 14 November 2018.

42 Boddington, P, Towards a Code of Ethics for Artificial Intelligence (Springer 2018) p 64.

43 ibid, p 65.

44 Supra, note 33.

45 Supra, note 33, p 207.

46 Supra, note 33, p 206.

47 S Papert, Mindstorms – Children, Computers and Powerful Ideas (1980).

48 Later, Seymour Papert and Idit Harel wrote Constructionism: Research Reports and Essays 1985–1990, Epistemology and Learning Research Group, the Media Lab, Massachusetts Institute of Technology (Ablex 1991). Here, they emphasised ideas of playful learning, viz learning by making, in order to build mental models: “Constructionism – the N word as opposed to the V word – shares constructivism’s connotation of ‘learning as building knowledge structures’ irrespective of the circumstances of the learning. It then adds the idea that this happens especially felicitously in a context where the learner is consciously engaged in constructing a public entity, whether it is a sand castle on the beach or a theory of the universe”: <namodemello.com.br/pdf/tendencias/situatingconstrutivism.pdf> accessed 14 November 2018.

49 Supra, note 47, p 8.

50 Supra, note 47, p 4.

51 See eg O’Neil, C, Weapons of Math Destruction (Penguin Books 2016).

52 Supra, note 33, p 117.

53 Supra, note 33, p 206.

54 Supra, note 33, p 154.

55 Clark, A and Chalmers, D, “The Extended Mind” (1998) 58(1) Analysis 7.

56 Supra, note 33, p 11.

57 Supra, note 42, p 68.