Can We Think with ChatGPT? The Future of Critical Thinking and the Humanities in the Era of Generative AI

Abstract

The greatest danger posed by generative AI such as ChatGPT is not that it will automate white-collar workers out of a job (Kevin Roose), but that it will diminish the human capacity to think autonomously. This is because it is extremely good at planning, decision-making, optimization, and, above all, writing, skills that together constitute the particular human cognitive ability we call “thinking.” I ask: Can “great evil” (in Hannah Arendt’s sense) arise from human-machine cognitive distribution? And can a collaboration between industry and the humanities provide an antidote? Healthy thinking requires an internal dialogue as well as the ability to engage in moral considerations. My study formulates specific proposals for fostering these skills: 1) industry can train GenAI models to assume multiple personalities (heteronyms) and to engage coherently with humans while defending contrasting views, which will help both humans and, eventually, AIs build and maintain a robust internal dialogue; 2) such models can be trained on existing forms of philosophical dialogue; and 3) they can be trained on literary narratives that give large language models access to moral judgments from the most diverse and subjective perspectives.
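To make proposal 1 concrete, the following is a minimal illustrative sketch of a multi-persona dialogue loop built on a generic chat-completion API (here the OpenAI Python client). The persona names, which borrow Fernando Pessoa’s heteronyms Alberto Caeiro and Álvaro de Campos, as well as the model name and all prompt wording, are assumptions made for illustration only, not the author’s implementation.

    # Sketch: two "heteronym" personas debating a question in alternation.
    # Assumes the OpenAI Python client (v1+) and OPENAI_API_KEY in the environment.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical persona framings, loosely inspired by Pessoa's heteronyms.
    PERSONAS = {
        "Alberto Caeiro": "You are Alberto Caeiro, a sensationist who trusts direct experience over abstraction.",
        "Alvaro de Campos": "You are Alvaro de Campos, a modernist fascinated by machines and technological progress.",
    }

    def reply(persona: str, transcript: list[str], question: str) -> str:
        """Ask one persona to respond to the debate so far, staying in character."""
        history = "\n".join(transcript) or "(no exchanges yet)"
        completion = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": PERSONAS[persona]},
                {"role": "user", "content": (
                    f"Question under debate: {question}\n"
                    f"Debate so far:\n{history}\n"
                    f"Reply in two or three sentences, defending your view."
                )},
            ],
        )
        return completion.choices[0].message.content

    def debate(question: str, rounds: int = 3) -> list[str]:
        """Alternate between personas so contrasting views stay in dialogue."""
        transcript: list[str] = []
        for _ in range(rounds):
            for name in PERSONAS:
                transcript.append(f"{name}: {reply(name, transcript, question)}")
        return transcript

    if __name__ == "__main__":
        for turn in debate("Does outsourcing writing to AI erode autonomous thinking?"):
            print(turn, "\n")

A design note on this sketch: keeping the full transcript in each prompt is what lets the contrasting views respond to one another rather than talk past each other, which is the “internal dialogue” dynamic the proposal aims to model.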

Presenters

Ana Ilievska
Postdoctoral Fellow and Lecturer, Department of French and Italian, Stanford University; Senior Research Fellow, Center for Science and Thought at the Department of Philosophy, University of Bonn, Nordrhein-Westfalen, Germany

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2024 Special Focus—People, Education, and Technology for a Sustainable Future

Keywords

Generative Artificial Intelligence, Critical Thinking, Humanities, Cognitive Automation, Moral Judgment
