Trust in Artificial Intelligence: Perceptions and Evaluations of Contextualized Artificial Intelligence

Abstract

There is broad agreement in human-centered artificial intelligence (AI) research that one of the most important features of near-future AIs is their trustworthiness. This matters all the more as many human-to-human contacts will be replaced by interactions with machines; if those machines are not programmed to be culturally sensitive and norm-based (“Social AI”) and do not behave in a friendly manner (“Friendly AI”), they will not be accepted. Based on the trust model by Thiebes et al. (2021), which differentiates between institution-based, situational, and behavioral trust as well as individual intentions, we carried out two studies to follow up on this model: (1) an online questionnaire (n=239) and (2) three focus group studies (same- and mixed-sex groups, n=12). Results show diverse concerns about risks, and many participants also hold mixed opinions about the fairness and usefulness of automated decision-making. However, trust was high if the AI application was known, and the AI was even perceived as beneficial when its actions were expected to achieve the intended objectives. These results call for a situational and contextual approach to trust: in one situation humans trust, in others they might not. Interestingly, decisions taken automatically by an AI were often evaluated as on par with, or even better than, those of human experts for specific decisions. These studies point to an important differentiation regarding contextualized, social AI, which should be tested for trustworthiness in further research.

Presenters

Caja Carola Thimm
Professor, Media Studies, University of Bonn, Germany

Digital Media
