Large Language Models and a Data Theory of Value: Towards a Political Economy of the AI Form and the Rhetoric of the AI-generated Image

Abstract

In her essay “Common Sensing?” Hito Steyerl theorized Large Language Models (LLMs) as “impersonating fake totalities, based on the averaged mass of trawled data.” Accepting Steyerl’s premise that LLMs aggregate massive totalities of data only to divide them into processed, consumable outputs, this paper builds a working theory that situates the AI form as a technology extending from the essential economic structure of late capitalism. Beginning with Marx’s Capital, the analysis draws a concrete parallel between Steyerl’s averaged mass of trawled data and the Marxian hypothesis that the commodity form “congeals human labor,” dividing labor into units of averaged labor time. Recontextualizing AI technology and its emergent media forms as extensions of the machinery of capitalism present in Marx’s theory of the commodity, this paper argues that, contrary to popular belief, AI is not a new paradigmatic shift but simply the most recent elaboration of the present capitalist logic. What, then, of the AI-generated image? Continuing from this structural analysis of the political economy of large language models, the paper concludes by suggesting a semiotic reading of the AI-generated image rooted in the material mechanizations of the productive process from which it originates.

Presenters

Jonah Henkle
Student, Master of Arts, New York University, New Jersey, United States

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2024 Special Focus—Images and Imaginaries from Artificial Intelligence

Keywords

Large Language Models, AI-generated Images, Capitalism, Rhetoric, Cultural Studies