What Self-Driving Cars Do (and Don't) Tell Us About Morality

Abstract

Self-driving cars will need to be programmed to react to emergency situations, sometimes involving potential loss of life. This will require human programming decisions about which human (and animal) lives should be given priority in such situations (the passengers'? a child's? the largest number that can be saved?). This has sometimes been presented in the media as requiring programmers to settle traditional moral questions, such as the legitimacy of utilitarianism, doing harm versus allowing harm, the dignity of humanity, and the like. In this paper, I explore the idea that programming for self-driving cars is unlikely to provide new answers about the content of traditional moral theory. Nevertheless, it may provide insight into more meta-level questions: what counts as a moral principle, what constitutes a moral agent, and what role principles play in moral thinking.

Presenters

Richard Dean
Professor, Philosophy, California State University Los Angeles, United States
