Understanding and Integrity



Moderator
Lesley Model, Lecturer, BA Media and Communications, London Metropolitan University

Trust in Artificial Intelligence: Perceptions and Evaluations of Contextualized Artificial Intelligence

Paper Presentation in a Themed Session
Caja Carola Thimm  

There is broad agreement in human-centered artificial intelligence (AI) research that one of the most important features of near-future AIs is their trustworthiness. This is all the more important as many human-to-human contacts will be replaced by interactions with machines; if those machines are not programmed to be friendly, culturally sensitive, and norm-based ("Social AI") and do not behave in a friendly manner ("Friendly AI"), they will not be accepted. Based on the model of trust by Thiebes et al. (2021), which differentiates between institution-based, situational, and behavioral trust and individual intentions, we carried out two studies to test this model: (1) an online questionnaire (n=239) and (2) three focus group studies (same- and mixed-sex groups, n=12). Results show diverse concerns about risks, and many participants also hold mixed opinions about the fairness and usefulness of automated decision-making. However, trust was high if the AI application was known, and the AI was even perceived as beneficial when its actions were expected to achieve the objectives. These results call for a situational and contextual approach to trust: in one situation humans trust, in another they might not. Interestingly, decisions taken automatically by an AI were often evaluated as on par with, or even better than, those of human experts for specific decisions. These studies point to an important differentiation of contextualized, social AI, whose trustworthiness should be tested in further research.

Artificial Interiors: Exploring Representation of the Seven Elements of Interior Design on Artificial Intelligence Platforms

Paper Presentation in a Themed Session
Jason Shields  

Emerging artificial intelligence (AI) technologies have begun to play a disruptive role in representing interior environments. These AI tools allow users to employ artificial neural networks to generate and manipulate static spatial ideations. However, as AI-produced images of surreal and extraordinary interior environments become increasingly common in journalistic media and online, designers must question the tools' applicability in the discipline of interior design. What are the potential effects of this technology, and does its application provide a falsified representation of interior design's core ideologies and pedagogical foundations? This research seeks to determine whether AI software can generate digital images of interior environments that accurately represent interior design's seven foundational elements: colour, form, light, line, pattern, texture, and space. Structured text prompts based on these elements are input into AI platforms such as DALL-E 2, Interior AI, Stable Diffusion, and Midjourney. The image outputs from each platform are visually analyzed using the aforementioned elements. The investigation argues that current AI image-generation tools cannot accurately represent interior space or consistently introduce finer details related to form, texture, line, and lighting. However, the AI tools successfully recreated colour, pattern, and space in some instances. As journalistic and digital media disciplines rely further on AI to ideate spatial representations, and as AI moves towards producing 3D objects and spaces, we must rigorously question what using a tool lacking these core pedagogical principles means for the representation of interior environments.
