
22 - Modeling Morality with Prospective Logic

from PART IV - APPROACHES TO MACHINE ETHICS

Published online by Cambridge University Press: 01 June 2011

Michael Anderson, University of Hartford, Connecticut
Susan Leigh Anderson, University of Connecticut

Summary

Introduction

Morality no longer belongs only to the realm of philosophers. Recently, there has been growing interest in understanding morality from a scientific point of view. This interest comes from various fields, for example, primatology (de Waal 2006), the cognitive sciences (Hauser 2007; Mikhail 2007), neuroscience (Tancredi 2005), and various other interdisciplinary perspectives (Joyce 2006; Katz 2002). The study of morality has also attracted the artificial intelligence community from the computational perspective, under several names, including machine ethics, machine morality, artificial morality, and computational morality. Research on modeling moral reasoning computationally has been conducted and reported on, for example, at the AAAI 2005 Fall Symposium on Machine Ethics (Guarini 2005; Rzepka and Araki 2005).

There are at least two reasons why studying morality from the computational point of view is important. First, with the growing interest in understanding morality as a science, modeling moral reasoning computationally will assist in better understanding morality. Cognitive scientists, for instance, can benefit greatly from a clearer picture of the complex interaction of cognitive aspects that build human morality; they may even be able to extract the moral principles people normally apply when facing moral dilemmas. Modeling moral reasoning computationally can also be useful for intelligent tutoring systems, for instance, to aid in teaching morality to children. Second, as artificial agents are increasingly expected to be fully autonomous and to work on our behalf, equipping agents with the capability to compute moral decisions is an indispensable requirement.
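
As a rough illustration of the second point, the sketch below shows one minimal way "computing a moral decision" can be cast in code: enumerate the actions available to an agent, rule out candidates that violate an explicit moral principle, and prefer the best of what remains. This is only a hypothetical Python analogue, not the chapter's approach: the chapter works with prospective logic programming (the ACORDA system; see Lopes and Pereira 2006; Pereira and Saptawijaya 2007), and the scenario encoding, the Action fields, and the double-effect-style rule (after Foot 1967) used here are illustrative assumptions.

```python
# Hypothetical toy sketch (not the chapter's ACORDA system): enumerate candidate
# actions for a trolley-style dilemma, filter them with a rule inspired by the
# doctrine of double effect (harm may be a foreseen side effect of the good
# achieved, but must not be the intended means to it), and prefer the
# permissible action with the best net outcome.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Action:
    name: str
    lives_saved: int
    lives_lost: int
    harm_is_means: bool  # is the harm itself the means to the good effect?


def permissible(a: Action) -> bool:
    """Double-effect-style filter: harm used as a means is ruled out."""
    return not a.harm_is_means


def decide(candidates: list[Action]) -> Optional[Action]:
    """Among permissible candidates, prefer the greatest net number of lives saved."""
    ok = [a for a in candidates if permissible(a)]
    return max(ok, key=lambda a: a.lives_saved - a.lives_lost, default=None)


if __name__ == "__main__":
    # Bystander and footbridge variants of the trolley problem, crudely encoded.
    candidates = [
        Action("do_nothing",     lives_saved=0, lives_lost=5, harm_is_means=False),
        Action("divert_trolley", lives_saved=5, lives_lost=1, harm_is_means=False),
        Action("push_bystander", lives_saved=5, lives_lost=1, harm_is_means=True),
    ]
    choice = decide(candidates)
    print(choice.name if choice else "no permissible action")  # prints: divert_trolley
```

The two-stage shape of the sketch, first generating what could be done and only then preferring among the morally permissible alternatives, loosely mirrors the generate-and-prefer style of reasoning the cited prospective logic work adopts.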

Type: Chapter
Information: Machine Ethics, pp. 398–421
Publisher: Cambridge University Press
Print publication year: 2011


References

Alferes, J. J., Brogi, A., Leite, J. A., and Pereira, L. M. 2002. Evolving Logic Programs. Pages 50–61 of: Flesca, S., Greco, S., Leone, N., and Ianni, G. (eds), Procs. 8th European Conf. on Logics in Artificial Intelligence (JELIA'02). LNCS 2424. Springer.
Anderson, M., Anderson, S., and Armen, C. 2006. MedEthEx: A Prototype Medical Ethics Advisor. In: Procs. 18th Conf. on Innovative Applications of Artificial Intelligence (IAAI-06).
Bringsjord, S., Arkoudas, K., and Bello, P. 2006. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems, 21(4), 38–44.
de Waal, F. 2006. Primates and Philosophers: How Morality Evolved. Princeton U. P.
Dell'Acqua, P., and Pereira, L. M. 2005. Preferential Theory Revision. Pages 69–84 of: Pereira, L. M., and Wheeler, G. (eds), Procs. Computational Models of Scientific Reasoning and Applications.
Dell'Acqua, P., and Pereira, L. M. 2007. Preferential Theory Revision (extended version). Journal of Applied Logic, 5(4), 586–601.
Foot, P. 1967. The Problem of Abortion and the Doctrine of Double Effect. Oxford Review, 5, 5–15.
Gelfond, M., and Lifschitz, V. 1988. The Stable Model Semantics for Logic Programming. In: Kowalski, R., and Bowen, K. A. (eds), 5th Intl. Logic Programming Conf. MIT Press.
Gigerenzer, G., and Engel, C. (eds). 2006. Heuristics and the Law. MIT Press.
Guarini, M. 2005. Particularism and Generalism: How AI Can Help Us to Better Understand Moral Cognition. In: Anderson, M., Anderson, S., and Armen, C. (eds), Machine Ethics: Papers from the AAAI Fall Symposium. AAAI Press.
Hauser, M. D. 2007. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. Little Brown.
Joyce, R. 2006. The Evolution of Morality. The MIT Press.
Kakas, A., Kowalski, R., and Toni, F. 1998. The Role of Abduction in Logic Programming. Pages 235–324 of: Gabbay, D., Hogger, C., and Robinson, J. (eds), Handbook of Logic in Artificial Intelligence and Logic Programming, vol. 5. Oxford U. P.
Kamm, F. M. 2006. Intricate Ethics: Rights, Responsibilities, and Permissible Harm. Oxford U. P.
Katz, L. D. (ed). 2002. Evolutionary Origins of Morality: Cross-Disciplinary Perspectives. Imprint Academic.
Kowalski, R. 2006. The Logical Way to be Artificially Intelligent. Page 122 of: Toni, F., and Torroni, P. (eds), Procs. of CLIMA VI, LNAI. Springer.
Lopes, G., and Pereira, L. M. 2006. Prospective Logic Programming with ACORDA. In: Procs. of the FLoC'06, Workshop on Empirically Successful Computerized Reasoning, 3rd Intl. J. Conf. on Automated Reasoning.
McLaren, B. M. 2006. Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions. IEEE Intelligent Systems, 21(4), 29–37.
Mikhail, J. 2007. Universal Moral Grammar: Theory, Evidence, and the Future. Trends in Cognitive Sciences, 11(4), 143–152.
Otsuka, M. 2008. Double Effect, Triple Effect and the Trolley Problem: Squaring the Circle in Looping Cases. Utilitas, 20(1), 92–110.
Pereira, L. M., and Lopes, G. 2007. Prospective Logic Agents. In: Neves, J. M., Santos, M. F., and Machado, J. M. (eds), Procs. 13th Portuguese Intl. Conf. on Artificial Intelligence (EPIA'07). Springer LNAI.
Pereira, L. M., and Saptawijaya, A. 2007. Moral Decision Making with ACORDA. In: Dershowitz, N., and Voronkov, A. (eds), Short papers call, Local Procs. 14th Intl. Conf. on Logic for Programming Artificial Intelligence and Reasoning (LPAR'07).
Powers, T. M. 2006. Prospects for a Kantian Machine. IEEE Intelligent Systems, 21(4), 46–51.
Rzepka, R., and Araki, K. 2005. What Could Statistics Do for Ethics? The Idea of a Commonsense-Processing-Based Safety Valve. In: Anderson, M., Anderson, S., and Armen, C. (eds), Machine Ethics: Papers from the AAAI Fall Symposium. AAAI Press.
Tancredi, L. 2005. Hardwired Behavior: What Neuroscience Reveals about Morality. Cambridge U. P.
