Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 4T3-OS-6d-04

Effects of Algorithm Aversion and Betrayal Aversion on Trust in AI
Hisashi TAKAGI*, Yang LI, Kazunori TERADA, Masashi KOMORI
Abstract

Trust is a behavior in which a trustor entrusts a decision about their own gain to a trustee, expecting a higher reward while accepting the risk of betrayal. Previous studies have pointed out a betrayal aversion bias, in which trust is reduced relative to the evaluated risk of the task because deceptive intention is attributed to the trustee. Other studies have pointed out an algorithm aversion bias, in which people avoid delegating decisions to algorithms rather than to other people. To investigate the effects of betrayal aversion and algorithm aversion on trust, we conducted an experiment that manipulated the type of trustee and the way the trustee computes its decision. Participants (n = 284) played a trust game, with the trustee's return rate known, against (a) a person who decides intentionally, (b) an AI that decides by algorithm, (c) a person who decides at random, or (d) an AI that decides at random, as well as a lottery task structurally equivalent to the trust game. Using the minimum acceptable return probability required to trust as the indicator, the analysis showed that neither the person/AI difference nor intentionality affected trust, and that trust rates were higher in all conditions than in the lottery task. This result suggests the existence of a cooperation overconfidence bias and an algorithm overconfidence bias, rather than a betrayal aversion or algorithm aversion bias.
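The "minimum acceptable return probability" indicator can be read as a threshold over the trustee's known return rate. As a rough illustration only (the paper's actual payoffs and elicitation procedure are not given here, so the function name and the numbers below are hypothetical), a risk-neutral benchmark for that threshold is the return probability at which the expected payoff from trusting equals the safe payoff from not trusting:

# Illustrative sketch, not the paper's method: payoffs S (keep), H (reciprocated),
# B (betrayed) are hypothetical placeholders.

def risk_neutral_threshold(safe: float, reciprocated: float, betrayed: float) -> float:
    """Smallest return probability p at which trusting is weakly better:
    p * reciprocated + (1 - p) * betrayed >= safe."""
    return (safe - betrayed) / (reciprocated - betrayed)

# Hypothetical payoffs: keep 100 points, receive 150 if the trustee reciprocates,
# 0 if the trustee betrays.
p_star = risk_neutral_threshold(safe=100, reciprocated=150, betrayed=0)
print(f"risk-neutral minimum acceptable return probability: {p_star:.2f}")  # 0.67

# A participant whose observed threshold lies below p_star trusts more readily than
# the risk-neutral benchmark (overconfidence); one above p_star trusts less readily (aversion).

Under this reading, a lower observed threshold against a person or an AI than in the structurally equivalent lottery task is what the abstract describes as cooperation overconfidence and algorithm overconfidence, respectively.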

© 2024 The Japanese Society for Artificial Intelligence