Deep Learning, Fake Images and Visual Social Media

Abstract

Visual social media is attracting ever greater attention, particularly as visuality intersects with big data and artificial intelligence. Recently, concerns have arisen that artificially generated visuals could supercharge dis- and misinformation online. Sitting at the intersection of artistic practice and social scientific research, this study explores questions around computer-generated images, aesthetics, fakery, and socio-political concerns about the rise of online visual communication. Employing deep learning techniques, we investigate the nature of fake visual content, using Generative Adversarial Networks (GANs) to generate artificial images and Convolutional Neural Networks (CNNs) to try to distinguish real from fake. Furthermore, we use our CNN to scan over 6 million images collected from public Twitter posts to estimate the proportion of fake images. The integration of deep learning into online visual communication raises many questions: How is artificial visual content generated, what kinds of ‘fake’ visuals are people sharing on social media, and will artificial intelligence supercharge visual fakery? These questions are asked by scholars in the humanities and social sciences, but also explored in artistic practice (particularly in computer art, as pioneered by Harold Cohen). The collaboration between artistic practice and social scientific research has proved invaluable in exploring the role of intelligent systems in shaping visual communication. It shows us that autonomous systems can generate photo-realistic fake images but are much less capable of determining which images are fake and which are real without systematic human guidance.
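
To give a concrete sense of the adversarial setup the abstract describes, here is a minimal PyTorch sketch of one GAN training step: a generator learns to map random noise to images while a discriminator learns to tell its output apart from real data. The network sizes, latent dimension, and the random tensor standing in for real images are illustrative assumptions, not the study's actual architecture.

```python
import torch
import torch.nn as nn

# Generator: maps a latent noise vector to a flattened 64x64 RGB image.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 1024), nn.ReLU(),
            nn.Linear(1024, img_dim), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# Discriminator: scores an image as real (toward 1) or generated (toward 0).
class Discriminator(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

latent_dim = 100
G, D = Generator(latent_dim), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

# Stand-in for a batch of real images scaled to [-1, 1] (assumption).
real_batch = torch.rand(16, 64 * 64 * 3) * 2 - 1

# Discriminator step: push real scores toward 1, fake scores toward 0.
z = torch.randn(16, latent_dim)
fake_batch = G(z).detach()
d_loss = loss_fn(D(real_batch), torch.ones(16, 1)) + \
         loss_fn(D(fake_batch), torch.zeros(16, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to fool the discriminator into scoring fakes as real.
g_loss = loss_fn(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The adversarial dynamic is visible in the two loss terms: the discriminator is rewarded for separating the two classes, the generator for collapsing that separation.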
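The classification side of the study can be sketched the same way: a small CNN that emits a single "probability this image is fake" score and is applied batch-wise, roughly in the spirit of scanning a large collection of Twitter images. All layer sizes, the input resolution, and the 0.5 decision threshold below are assumptions for illustration, not the detector actually trained in the study.

```python
import torch
import torch.nn as nn

# Small CNN that emits one "probability this image is fake" score per image.
class FakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, 3, 64, 64)
        return self.classifier(self.features(x))

model = FakeDetector().eval()

# Score a batch and count images above a decision threshold, mirroring a
# scan over a large image collection (random tensors stand in for data).
images = torch.rand(8, 3, 64, 64)
with torch.no_grad():
    scores = model(images).squeeze(1)
flagged = (scores > 0.5).sum().item()
print(f"{flagged} of {len(images)} images flagged as likely fake")
```

In practice such a detector would need supervised training on labeled real and generated images, which is precisely where the study's finding about the need for systematic human guidance comes in.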
