Abstract
Intentionality is a central concept in philosophy that deals with the relationship between minds and their contents, or thoughts and the objects they represent. Franz Brentano’s theory of intentionality holds that there can be no mind without representations: every mental state stands in a relation between internal thoughts and the external things in the world they are about. The rise of neural network-based computing over the last decade has reshaped how machines represent meaning, and with it the question of machine intentionality. Embeddings play a significant role in this context: these vectors of real numbers can represent features learned by a network. Different types of networks use different embeddings for specific purposes. For example, variational autoencoders (VAEs) learn compact representations of data such as images, text, or music through an encoder-decoder framework. Word2Vec and GANs are other examples of neural networks that use embeddings to represent words and images, respectively. The embedding type that stands out for its creative potential, however, is CLIP (Contrastive Language-Image Pre-training). This model encodes both text and images into a shared space, allowing practitioners to evaluate how closely related visual and textual concepts are: an image embedding whose values lie close to those of a text embedding indicates that the visual and textual concepts are closely connected. This dual representation enables strong connections between visual and verbal languages, opening up new possibilities for creative practices such as metaphor, poetics, and analogy.
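The shared-space comparison described above can be sketched in a few lines. This is an illustrative example only: the four-dimensional vectors below are made up stand-ins for the embeddings a real CLIP model would produce (typically 512-dimensional), and cosine similarity is the standard measure used to judge how close an image embedding lies to a text embedding.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings standing in for the outputs of CLIP's
# image encoder and text encoder (real values come from the model).
image_emb = np.array([0.9, 0.1, 0.3, 0.0])        # embedding of a photo
text_emb_a = np.array([0.8, 0.2, 0.4, 0.1])        # caption A's embedding
text_emb_b = np.array([0.0, 0.9, 0.0, 0.8])        # caption B's embedding

sim_a = cosine_similarity(image_emb, text_emb_a)
sim_b = cosine_similarity(image_emb, text_emb_b)

# The caption whose embedding lies closer to the image embedding is
# judged the better textual match for the visual concept.
best_caption = "A" if sim_a > sim_b else "B"
```

In practice, a library such as Hugging Face `transformers` or `open_clip` would produce the embeddings; the comparison step itself is exactly this similarity calculation.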
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2024 Special Focus—Images and Imaginaries from Artificial Intelligence
Keywords
Intentionality, Representation, Neural networks, Embeddings, Visual Arts, Artificial Intelligence, Metaphor