Cognitive Studies: Bulletin of the Japanese Cognitive Science Society
Online ISSN : 1881-5995
Print ISSN : 1341-7924
ISSN-L : 1341-7924
Feature: Cognitive science of rationality
Trust calibration as rationality for human–AI cooperative decision making
Seiji Yamada

2022 Volume 29 Issue 3 Pages 364-370

Abstract

In this paper, we discuss the rationality of AI and explain the rationality of a human–AI system in terms of our adaptive trust calibration. First, we describe AI's rationality by introducing the formalization of reinforcement learning. Then we explain our adaptive trust calibration, which has been developed for rational human–AI cooperative decision making. The safety and efficiency of human–AI collaboration often depend on how appropriately humans calibrate their trust in AI agents. Over-trusting an autonomous system can cause serious safety problems. Although many studies have focused on the importance of system transparency for maintaining proper trust calibration, research on detecting and mitigating improper trust calibration remains limited. To fill this gap, we propose an adaptive trust calibration method that consists of a framework for detecting improper calibration status by monitoring the user's reliance behavior, and cognitive cues, called "trust calibration cues," that prompt the user to recalibrate their trust.
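The abstract points to the standard reinforcement learning formalization as the basis for AI's rationality but does not reproduce it. As a minimal sketch, assuming the usual Markov decision process notation (not taken from the paper itself), a rational agent is one that follows the policy maximizing expected discounted return:

\[
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} \, r(s_t, a_t) \right],
\]

where $s_t$ and $a_t$ are the state and action at time step $t$, $r$ is the reward function, $\gamma \in [0, 1)$ is the discount factor, and $\pi^{*}$ is the optimal (rational) policy. In this reading, the AI's rationality amounts to reward maximization.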
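The proposed method has two parts: detecting an improper calibration status from the user's observed reliance behavior, and presenting trust calibration cues to prompt recalibration. Below is a minimal Python sketch of how such a monitoring loop could look; the function names, the mismatch criterion, and the threshold are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch of the adaptive trust calibration loop described in the
# abstract: detect improper calibration from the user's reliance behavior,
# then present a "trust calibration cue" (TCC). All names, thresholds, and the
# mismatch criterion are illustrative assumptions, not the paper's code.

def reliance_status(relied_on_ai: bool, ai_was_correct: bool) -> str | None:
    """Classify one decision as over-trust, under-trust, or properly calibrated."""
    if relied_on_ai and not ai_was_correct:
        return "over-trust"    # user relied on the AI when it was wrong
    if not relied_on_ai and ai_was_correct:
        return "under-trust"   # user ignored the AI when it was right
    return None                # reliance matched the AI's actual performance

def adaptive_trust_calibration(decisions, cue_threshold=3):
    """Present a calibration cue after repeated improper-reliance observations."""
    mismatches = 0
    for relied_on_ai, ai_was_correct in decisions:
        status = reliance_status(relied_on_ai, ai_was_correct)
        mismatches = mismatches + 1 if status else 0
        if mismatches >= cue_threshold:
            print(f"TCC: detected {status}; please re-evaluate the AI's reliability.")
            mismatches = 0  # reset after prompting the user to recalibrate

# Example: the user keeps relying on an AI that is repeatedly wrong (over-trust).
adaptive_trust_calibration([(True, False)] * 3)

Here, repeated reliance on an incorrect AI is read as over-trust, and repeated rejection of a correct AI as under-trust; once either pattern persists, a cue prompts the user to recalibrate rather than forcing a particular decision on them.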

© 2022 Japanese Cognitive Science Society