Caring Control: Why and How We Misrepresent the Use of Facial Recognition in China

Abstract

Policymakers and scientists working on facial recognition and Artificial Intelligence (AI) have often assumed that governments with poor human rights records use technological powers to monitor their citizens. The Global AI Index and the AI Global Surveillance Index (praising Ireland) rest on the premise that citizens' docile acceptance of facial recognition stems from a lack of public awareness of governmental surveillance, especially during the tightened control of the COVID-19 pandemic. These assumptions are highly questionable: one of the most striking characteristics of China's rapid development in this field is a nurtured sense that the government cares and can be trusted. Citizens trust in part because of internal transparency about the abuses of AI. But whom do they trust? The municipal government? The enterprises that power the AI apparatus? The various ministries of the state? These stakeholders play different roles in promoting a multi-layered discourse of "care." This paper answers the following questions: How have we misrepresented AI in China? Why are these misrepresentations false? Why do we fall into them? What is at stake if we keep repeating this human-rights narrative? The misrepresentation prevents concerned citizens outside China from understanding concerned citizens in China, overstates differences, and understates the shared stakes in AI-driven economic growth.

Presenters

Xuenan Cao
Postdoc, MacMillan Center for International and Area Studies, Yale University, United States

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

Media Literacies

KEYWORDS

FACIAL RECOGNITION, CHINA, AI SURVEILLANCE, CARE