In this paper, we explain how theorizing about Trolley Cases is related to answering ethical questions facing the employees of car manufacturers. We canvass three important accounts of this relationship already defended in the literature. We first consider two instances of Trolley Optimism, views on which thinking about Trolley Cases bears in an important way on how autonomous vehicles (AVs) should be designed. A traditional form of Trolley Optimism sees Trolley Cases as structurally identical to real-world cases involving AVs and seeks to deploy traditional philosophical resources to inform the design of AVs. A second form, inherent in the MIT Moral Machine approach, seeks to use Trolley Cases to collect responses from a wide audience, aggregate those data, and then apply the insights gleaned from them to enact our collective preferences in the design of self-driving cars. Trolley Pessimists are skeptical of the value of Trolley Cases, typically because they either doubt the value of thought experiments or think that AV crash scenarios are too dissimilar to Trolley Cases. We too think that deciding how to program AVs is importantly different from deciding what the best course of action is in a Trolley Case. But our Trolley Pessimism is grounded in the view that the machine learning systems underlying self-driving cars force us to adopt a paradigm on which it is choices about entire training sets that are subject to ethical evaluation, significantly diminishing the value of Trolley Cases.
Autonomous Vehicles, Trolley
2019 Special Focus: The Social Impact of AI: Policies and New Governance Models for Social Change
Paper Presentation in a Themed Session
Assistant Professor, Philosophy, Northeastern University
Lecturer, Philosophy, Harvard University, United States