Abstract
To what extent can advances in computational literary studies be used to understand what it means for a work of art to be ‘canonical’? This paper considers recent attempts to define literary value empirically, through the machine mapping of concepts like ‘popular’ and ‘prestigious’ (Stanford Literary Lab, 2018). Drawing on Frank Kermode (1975), who sought to develop an understanding of ‘the classic’ as a site of especial ‘interpretative over-determination’ – distinguishable from the ‘imperial classic’ of T.S. Eliot (1944) and Charles Sainte-Beuve (1850), for whom the classic endures through the accommodation of empire – I will argue that debates over the composition of the canon are really debates about how we establish value. Following Ronald Dworkin (2011), who defends the holism of value, the paper argues that computational analysis cannot give an account of the canon because value in the humanities is not an empirical fact: as our understanding of value changes in general, so does the meaning of value itself. For this reason, the canon should be thought of as a site of what Mukherjee (2015) calls ‘contestation’, rather than establishable fact.
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2019 Special Focus - The World 4.0: Convergence of Knowledges and Machines
KEYWORDS
Canon, Classic, Digital Humanities, Value, Literature