Deepfake Pornography: Attitudes and Behaviors across Seven Countries

Abstract

Great strides in artificial intelligence have also resulted in increasingly realistic involuntary synthetic pornographic imagery (ISPI, also known as deepfake porn). While technology companies have increasingly developed policies to address this content, regulation lags behind. There is very little research indicating how the general public feels about this content, or documenting consumption and creation behaviors. This study aims to fill that gap by surveying over 10,000 participants across Spain, the Netherlands, the USA, Australia, South Korea, Mexico, and Poland. As part of a larger study on the phenomenon of image-based abuse (revenge porn), participants were asked about their familiarity with deepfake pornography, their own behaviors related to it, and their attitudes toward whether those behaviors should be criminalized. Additionally, we report on the prevalence of broader victimization and perpetration of image-based abuse, including deepfake pornography. The findings provide insight into the scale of this problem across a number of countries and can help policymakers and technology companies (both those developing the technology and those whose platforms may host the content) recognize and address it.

Presenters

Rebecca Umbach
UXR, Trust & Safety, Google, California, United States

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2023 Special Focus—Who Can We Trust? Ethical and Responsible Artificial Intelligence in Digital Communication Systems

KEYWORDS

Image-Based Abuse, Generative AI, Deepfake Pornography, Involuntary Synthetic Pornographic Imagery

Digital Media

Videos

Deepfake Pornography: Attitudes and Behaviors across Seven Countries