
31 - Computational Approaches to Morality

from Part IV - Computational Modeling in Various Cognitive Fields

Published online by Cambridge University Press: 21 April 2023

Edited by Ron Sun, Rensselaer Polytechnic Institute, New York

Summary

Computational work on morality has emerged from two major sources: empirical moral science and philosophical ethics. Moral science has revealed a diversity of moral phenomena: moral behavior (including moral decision making), moral judgments, moral emotions, moral sanctions, and moral communication. Philosophical ethics has long focused on moral decision making, and this is where most of the computational work has emerged. Much of it uses rule-based systems rooted in formal logic, complemented by connectionist, case-based, and other approaches, and more recently by reinforcement learning models. Computational work on moral judgments is sparser, in part because moral judgments build on numerous complex mental capacities, such as causal and counterfactual reasoning and theory of mind. Nonetheless, some models of blame judgments have emerged that draw on information-processing approaches from empirical moral science. Even less work has tackled moral emotions, sanctions, and communication, phenomena that present vast challenges and opportunities for future work.
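To make the rule-based strand concrete, the sketch below shows in Python how a simple deontic filter over candidate actions might look: prohibitions rule actions out, obligations are preferred among what remains, and task utility only breaks ties. This is a minimal illustration of the general idea, assuming invented rule contents, action names, and a tie-breaking scheme; it is not a reproduction of any specific system discussed in the chapter.

```python
# Toy sketch only: a rule-based "deontic filter" over candidate actions, in the
# spirit of logic-based approaches to moral decision making. The rules, action
# names, and scoring below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effects: frozenset   # descriptive tags for the action's predicted effects
    utility: float       # task utility, kept separate from moral evaluation

FORBIDDEN_EFFECTS = {"harms_patient", "breaks_promise"}   # prohibition rules
OBLIGATORY_EFFECTS = {"informs_patient"}                  # obligation rules

def permissible(action: Action) -> bool:
    """An action is impermissible if any of its predicted effects is forbidden."""
    return not (action.effects & FORBIDDEN_EFFECTS)

def choose(candidates):
    """Among permissible actions, prefer those that discharge more obligations;
    break remaining ties by task utility."""
    allowed = [a for a in candidates if permissible(a)]
    if not allowed:
        return None   # no permissible option: in practice, defer or replan
    return max(allowed, key=lambda a: (len(a.effects & OBLIGATORY_EFFECTS), a.utility))

options = [
    Action("withhold_diagnosis", frozenset({"breaks_promise"}), utility=2.0),
    Action("disclose_gently", frozenset({"informs_patient"}), utility=1.5),
]
print(choose(options).name)   # prints "disclose_gently"
```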

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2023

