What Motivates People to Trust ‘AI’ Systems?

Companies, organizations, and governments across the world are eager to employ so-called ‘AI’ (artificial intelligence) technology in a broad range of products and systems. The promise of this cause célèbre is that the technologies offer increased automation, efficiency, and productivity – meanwhile, critics sound warnings of illusions of objectivity, pollution of our information ecosystems, and reproduction of biases and discriminatory outcomes. This paper explores patterns of motivation in the general population for trusting (or distrusting) ‘AI’ systems. Based on a survey of more than 450 respondents from more than 30 countries (yielding about 3,000 open-text answers), this paper presents a qualitative analysis of current opinions and thoughts about ‘AI’ technology, focusing on reasons for trusting such systems. The different reasons are synthesized into four rationales (lines of reasoning): the Human favoritism rationale, the Black box rationale, the OPSEC rationale, and the ‘Wicked world, tame computers’ rationale. These rationales provide insights into human motivation for trusting ‘AI’ that could be relevant to developers and designers of such systems, as well as to scholars developing measures of trust in technological systems.

Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2403.05957 [cs.HC]
(or arXiv:2403.05957v1 [cs.HC] for this version)

Posted on: April 11, 2024, 6:05 am Category: Uncategorized
