
Humane Driving

Published online by Cambridge University Press:  25 February 2021

Vaughan Black
Vaughan Black, Retired Professor, Dalhousie University.
Andrew Fenton
Andrew Fenton, Associate Professor, Dalhousie University.


The advent of fully autonomous vehicles (AVs) raises many questions. While some appear purely technical, others engage matters of public policy, sometimes with a prominent ethical component. Our expertise lies in law and applied ethics, and so our inquiry will focus on the legal and ethical issues arising from a widespread adoption of AVs and their responses in emergency situations involving animals. By “animals” we do not mean all animals other than humans. We will concentrate on interest-bearing animals. These are animals who possess a welfare or wellbeing, animals whose lives can go well or badly from their perspective.

Research Article
© The Author(s), 2021



We thank Letitia Meynell and an anonymous reviewer for comments on a draft.


1. A view explored in Duncan Purves, Ryan Jenkins & Bradley J Strawser, “Autonomous Machines, Moral Judgment and Acting for the Right Reasons” (2014) 18:4 Ethical Theory & Moral Practice 851.

2. See, e.g., “Automobile Insurance in the Age of Autonomous Vehicles” (2015), online (pdf) at KPMG automobile-insurance-in-the-era-of-autonomous-vehicles-survey-results-june-2015.pdf []; SA Beiker, “Legal Aspects of Autonomous Driving” (2012) 52:4 Santa Clara L Rev 1145 at 1149-50; U Eberl, Smarte Maschinen (Carl Hanser Verlag, 2016) 174; US Department of Transportation, National Highway Traffic Safety Administration, Critical Reasons for Crashes Investigated in the National Motor Vehicle Crash Causation Survey (National Center for Statistics and Analysis, 2015); Leonard Evans, “The Dominant Role of Driver Behavior in Traffic Safety” (1996) 86:6 American J of Public Health 784.

3. We have no doubt that the inspiration for such an approach lies with Regan. Tom Regan, “The Case for Animal Rights” in Peter Singer, ed, In Defense of Animals (Harper & Row, 1985) 13 at 13-26.

4. Introduced in Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect” (1967) 5 Ox Rev 5, the trolley problem gained greater prominence in Judith J Thomson, “Killing, Letting Die and the Trolley Problem” (1976) 59 The Monist 204.

5. Sven Nyholm & Jilles Smids, “The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem?” (2016) 19 Ethical Theory & Moral Practice 1275. Nyholm and Smids identify several differences between the trolley problem and the AV case. One is that in the trolley problem the choice must be made under the pressure of emergent circumstances, with no chance for deliberation; in the AV situation the decision can be made in advance, and after wide consultation. A second difference is that in the trolley problem all the outcomes are distinctly specified in advance and known to the chooser, whereas in the AV problem there is uncertainty, incomplete information, and risk: swerving into the tree could possibly kill the AV’s occupants, but might only injure them. This last point is explored in Björn Meder et al, “How Should Autonomous Cars Drive? A Preference for Defaults in Moral Judgments Under Risk and Uncertainty” (2019) 39:2 Risk Analysis 205.

6. Or that if they do arise, the AV will not have enough information to intelligently choose, so should just be programmed to brake and hope for the best: Rebecca Davnall, “Solving the Single-Vehicle Self-Driving Car Trolley Problem Using Risk Theory and Vehicle Dynamics” (2019) 25:2 Science & Engineering Ethics 431.

7. Bryan Casey, “Amoral Machines, or: How Roboticists Can Learn to Stop Worrying and Love the Law” (2017) 111 Nw UL Rev 231. For an argument that Casey is wrong, and that ethical thinking has a big role to play in this domain, see W Bradley Wendel, “Economic Rationality and Ethical Values in Design-Defect Analysis: The Trolley Problem and Autonomous Vehicles” (2018) 55:1 Cal W L Rev 129.

8. Samuel I Schwartz with Karen Kelly, No One at the Wheel: Driverless Cars and the Road of the Future (Public Affairs, 2018) at 168.

9. This last option is considered in Sabine Gless, Emily Silverman & Thomas Weigend, “If Robots Cause Harm, Who is to Blame? Self-Driving Cars and Criminal Liability” (2016) 19 New Crim L Rev 412, and John Zipp, “The Road Will Never Be the Same: A Reexamination of Tort Liability for Autonomous Vehicles” (2016) 43 Transportation LJ 137. In the civil sphere it is more likely that the advent of AVs will cause more jurisdictions to adopt a no-fault regime for vehicle accidents: the operation of tort law in respect of personal injury and property damage caused by automobiles will be legislatively eliminated and replaced with a no-fault compensation scheme similar to that already in place in Quebec.

10. Christoph Luetge, “The German Ethics Code for Automated and Connected Driving” (2017) 30:4 Philosophy & Technology 547 at 548. The Commission reported the following year. For an English translation see Germany, Federal Ministry of Transport and Digital Infrastructure, Ethics Commission, Automated and Connected Driving (2017), online (pdf) at (German Ethics Commission) [].

11. A few writers note that the being who occasions a whom-to-harm scenario might be a nonhuman, but they do not pursue the consequences of this. E.g. Nick Belay, “Robot Ethics and Self-Driving Cars: How Ethical Determinations in Software Will Require a New Legal Framework” (2015) 40:1 J Legal Prof 119 at 121-22 and Hod Lipson & Melba Kurman, Driverless: Intelligent Cars and the Road Ahead (MIT Press, 2016) 252.

12. Sarah Zielinski, “Climate Change Will Accelerate Earth’s Sixth Mass Extinction” (2015), online at Smithsonian Magazine [].

13. Baker v Harmina, 2018 NLCA 15 at paras 48-52, Hoegg JA, dissenting in part.

14. Art 898.1 CCQ.

15. Andrew Fenton, “Decisional Authority and Animal Research Subjects” in Kristin Andrews & Jacob Beck, eds, The Routledge Handbook of Philosophy of Animal Minds (Routledge, 2018) 475.

16. Belay, supra note 11 at 129 (advocating that all AVs be so programmed).

17. Jean-François Bonnefon, Azim Shariff & Iyad Rahwan, “The Social Dilemma of Autonomous Vehicles” (2016) 352 Science 1573 offers the results of an empirical study showing that most people would like others to buy AVs that had a Utilitarian algorithmic morality, but would themselves prefer to acquire AVs that protect their occupants at all costs. Against that, there is some empirical evidence that when whom-to-harm situations do arise on the road, human drivers make decisions that seem informed by a Utilitarian outlook: Anja K Faulhaber et al, “Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles” (2019) 25 Science & Engineering Ethics 399. None of the hypotheticals employed in this study involved nonhumans.

18. The title of a recent article suggests yet another variation: Derek Leben, “A Rawlsian Algorithm for Autonomous Vehicles” (2017) 19:2 Ethics & Information Technology 107. Driving behind a veil of ignorance strikes us as problematic. In an even lighter vein, Adam Gopnik speculates about a Nietzschean version which would drive right over anything, and also software inspired by Albert Camus “which would stall and pause in the middle of the highway while the traffic backs up behind—and then suddenly shoot off, bang, because the existential leap must be made, and some pedal struck.” A Gopnik, “A Point of View: The Ethics of the Driverless Car” (2014), online at BBC News [].

19. Presumably police cars, fire engines and ambulances will have software that differs from that mandated for private vehicles—for example, by permitting breach of speed limits and some other traffic regulations.

20. We sidestep the question here of whether the legal response to this question will or should be imposed at the national level or provincially. That will depend on as-yet unresolved questions of constitutional law. Present laws governing vehicles in Canada exist at both the federal and provincial levels, and questions relating to the advent of AVs are being discussed at both levels. While a uniform national response seems simpler, provincial variation is possible. AVs equipped with GPS can tell when a provincial border has been crossed and adjust accordingly. Indeed, such adjustments will be necessary in any event, since traffic laws already differ provincially (e.g., on whether a right turn is permitted on a red light), so AV software will already have to respond to legal differences arising from crossing provincial borders. That being so, there could be software that responds differently in a whom-to-harm situation depending on the law on that matter applicable in the place where the vehicle happens to be at the time.
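The mechanism note 20 contemplates, software that swaps its active rule set when a GPS fix resolves to a different province, can be sketched as a simple jurisdiction lookup. The provinces, rule fields, and values below are illustrative assumptions for exposition, not statements of actual traffic law or of any real AV system's design:

```python
# Hypothetical sketch of jurisdiction-aware rule selection in an AV's
# planning software. All provinces, fields, and values are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class TrafficRules:
    right_turn_on_red: bool  # permitted in this jurisdiction?
    max_speed_kmh: int       # default highway limit

# Per-province rule tables consulted once a GPS position has been
# resolved to a jurisdiction (the geocoding step is assumed).
RULES_BY_PROVINCE = {
    "Alberta": TrafficRules(right_turn_on_red=True, max_speed_kmh=110),
    "Ontario": TrafficRules(right_turn_on_red=True, max_speed_kmh=100),
}

def rules_for(province: str) -> TrafficRules:
    """Return the rule set in force where the vehicle currently is."""
    return RULES_BY_PROVINCE[province]

# Crossing a border simply swaps the active rule table; a whom-to-harm
# policy could, on the same pattern, vary with the governing province.
active = rules_for("Ontario")
```

The same table-lookup pattern would accommodate the note's further suggestion: a whom-to-harm field alongside the ordinary traffic rules, keyed to whichever province's law governs at the moment of the emergency.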

21. The view that the software for whom-to-harm problems will not be left to individual manufacturers or AV owners is broadly accepted: Ivó Coca-Vila, “Self-Driving Cars in Dilemmatic Situations: An Approach Based on the Theory of Justification in Criminal Law” (2018) 12:1 Crim L & Philosophy 59 at 61-63; Jan Gogoll & Julian Müller, “Autonomous Cars: In Favour of a Mandatory Ethics Setting” (2017) 23:3 Science & Engineering Ethics 681; Belay, supra note 11 at 129.

22. We suspect that they swerve less frequently for animals than for humans. A study that asked American college students whether, as drivers, they would swerve to avoid killing a pedestrian, even at the risk of injury to themselves, elicited the following responses: 94% would swerve to avoid hitting a human child, 84% would swerve for an adult, and 27% for an (unspecified) animal: Katherine Brunk, “The Pedestrian Safety Problem and the Ethical Implications in the Age of Autonomous Vehicles” (2019), online at All Theses 3131, Clemson University [] at 41-42.

23. Olsen v Barrett, 2002 BCSC 877 at para 52.

24. Molson v Squamish Transfer Ltd (1969), 70 WWR 113 (BCSC) at 114.

25. Drivers can even be liable for property damage caused when they swerve to avoid killing a cat that runs in front of their car: Falkenham v Zwicker (1978), 32 NSR (2d) 199, 93 DLR (3d) 289 (SC).

26. Sutherland v Glasgow Corp, 1949 SLT 388 (2nd Div). There, the court was quite clear in articulating the choice as being between the life of the animal and mere minor injury to a human. Although the court found that the driver’s actions did save the dog’s life, it still held that he had made the wrong choice. He should have pursued the option that involved killing the dog but not bruising his human passenger. For a Canadian case where braking (not swerving) and coming to a stop to avoid killing a small animal was held to be negligent, see Gill v Bains, [1985] BCJ No 510, Vancouver Registry No B831070 (SC).

27. Canadian Pacific Ltd v Gill, [1973] SCR 654 at 665, Spence J.

28. Lewis Klar & Cameron Jefferies, Tort Law 6th ed (Thomson Reuters, 2017). For a sample of cases where this has been applied, see Ferguson Estate v MacLeod, [2000] PEIJ No 11, 2000 PESCTD 9; Butler v O’Brien, [1954] ILR at para 85, [1954] 34 MPR 121, 1954 CarswellNfld 16 (NS); Morton (Next friend of) v Sykes, [1951] OJ No 250, [1951] OWN 687, 1951 CarswellOnt 293 (High Ct); Wood v Paget, [1938] 3 WWR 33, [1938] 4 DLR 325 (BCCA).

29. Sturm et al v Gagne Gravel Co Ltd et al, (1966) 57 WWR 344 (Man QB) at para 16.

30. Something like this was the conclusion of the German Ethics Commission, supra note 10. Recommendation 7 states that AVs “must be programmed to accept damage to animals … if this means that personal injury [to humans] can be prevented.” One scholar who anticipated our fear that AVs might make things worse for nonhumans is Oliver Bendel. See Oliver Bendel, “Considerations About the Relationship between Animals and Machine Ethics” (2016) 31:1 AI & Society 103. But his reasoning (at 107) was that AVs aligned to economic interests would avoid swerving because they would prefer to damage animals rather than the car.

31. Christine Korsgaard, “Interacting with Animals: A Kantian Account” in Tom L Beauchamp & RG Frey, eds, The Oxford Handbook of Animal Ethics (Oxford University Press, 2011) 91.

32. Tom L Beauchamp, “Rights Theory and Animal Rights” in Beauchamp & Frey, ibid, 198.


33. David DeGrazia, Animal Rights: A Very Short Introduction (Oxford University Press, 2002).

34. The Utilitarian RG Frey accepted both that we have direct duties to other animals and that some humans did not enjoy a higher moral status than some other animals. See RG Frey, “Animals and Their Medical Use” in Andrew I Cohen & Christopher Heath Wellman, eds, Contemporary Debates in Applied Ethics (Blackwell Publishing Ltd, 2005) 91.

35. It may seem odd to use Kant here, particularly as his view is quite a dated one. We have two reasons for our choice: (1) As noted above, Kant provides a clear and well-known indirect duty view of animals. (2) Unlike contemporaries like Carruthers (Peter Carruthers, The Animals Issue: Moral Theory in Practice (Cambridge University Press, 1992)), he also evidences some surprising sensitivities to other animals despite their lack of moral standing. Of course, Kant is also a giant in theoretical ethics, and his influence continues unabated.

36. Korsgaard, supra note 31.

37. Immanuel Kant, Lectures on Ethics, translated by Louis Infield (Hackett, 1963).

38. There is a risk in using the Lectures on Ethics as a source of Kant’s thought. These are not, on the whole, notes taken during his lectures, and they do not reflect his mature thought. (On this point see Alice Pinheiro Walla, Review of Kant’s Lectures on Ethics: A Critical Guide, Lara Denis & Oliver Sensen, eds, (2016), online at Notre Dame Philosophical Rev [].) Nevertheless, the position on our indirect obligations to animals taken in those notes (and we will limit our use of primary sources for Kant’s view on animals to the Lectures on Ethics) is useful for our discussion, as it reflects a reasonable view taken from that vantage point.

39. Kant, supra note 37.

40. Patrick Lin, “Why Ethics Matters for Autonomous Cars” in Markus Maurer et al, eds, Autonomous Driving: Technical, Legal and Social Aspects (Springer, 2016) at 69.

41. Eleonora Gullone, Animal Cruelty, Antisocial Behaviour, and Aggression: More than a Link (Palgrave Macmillan, 2012).

42. By “risky cruelty” we mean animal cruelty that significantly increases the risk of the relevant actor mistreating humans.

43. Jean Lian, “Silence on the Floor” (2013), online at OHS Canada [] and Jessica H Leibler, Patricia A Janulewicz & Melissa J Perry, “Prevalence of Serious Psychological Distress among Slaughterhouse Workers at a United States Beef Packing Plant” (2017) 57:1 Work 105.

44. Kant himself seemed to believe that butchering adversely affected the humanity of butchers. Kant, supra note 37.

45. Ibid at 241.


46. Ibid at 239-40.


47. Immanuel Kant, Grounding for the Metaphysics of Morals, translated by James W Ellington (Hackett, 1993) (original work published 1785).

48. Jonathon Gatehouse, “Human Activity Pushing Earth towards ‘Sixth Mass Species Extinction,’ Report Warns” (2018), online at CBC News [] and Isabelle Gerretsen, “One Million Species Threatened with Extinction because of Humans” (2019), online at CNN [].

49. DeGrazia, supra note 33.

50. Andrew Fenton, “On the Need to Redress an Inadequacy in Animal Welfare Science: Toward an Internally Coherent Framework” (2012) 27:1 Biology & Philosophy 74.

51. David DeGrazia, “Moral Status As a Matter of Degree?” (2008) 46:2 Southern J Philosophy 181.

52. For evidence of this in the scientific use of animals see ibid.


53. E.g., EC, Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes, [2010] OJ L 276/33.

54. Beauchamp, supra note 32 and Martha Nussbaum, “The Capabilities Approach and Animal Entitlements” in Beauchamp & Frey, supra note 31 at 228.

55. We are not suggesting that such an attitudinal change about first-to-be-used animals would be defensible.

56. Alastair Bland, “Will Driverless Cars Mean Less Roadkill?” (2015), online at Smithsonian Magazine []. See also Candice Gaukel Andrews, “Self-Driving Vehicles Could Save Animal Lives” (2018), online at Good Nature Travel [].

57. E.g., Gary Varner, “Environmental Ethics, Hunting, and the Place of Animals” in Tom L Beauchamp & RG Frey, eds, The Oxford Handbook of Animal Ethics (Oxford University Press, 2011).

58. Fenton, supra note 50.

59. Fenton, supra note 15.

60. Kristin Andrews et al, Chimpanzee Rights: The Philosophers’ Brief (Routledge, 2019).

61. Andrew Fenton & Syd Johnson, “Philosophers’ Brief on Chimpanzee Personhood” (2018), online at Impact Ethics [].

62. Kristin Andrews, “The Psychological Concept of ‘Person’” (2016) 10:17 Animal Sentience 1.

63. Rasmus Nielsen et al, “Tracing the Peopling of the World through Genomics” (2017) 541 Nature 302 and Viviane Slon et al, “The Genome of the Offspring of a Neanderthal Mother and a Denisovan Father” (2018) 561 Nature 113.

64. Fenton, supra note 15.
