Host: The Japanese Society for Artificial Intelligence
Name : The 33rd Annual Conference of the Japanese Society for Artificial Intelligence, 2019
Number : 33
Location : [in Japanese]
Date : June 04, 2019 - June 07, 2019
Poor trust calibration in human-AI collaboration often degrades overall system performance in terms of safety and efficiency. Existing studies have primarily examined the importance of system transparency in maintaining proper trust calibration, with little emphasis on how to detect over-trust and under-trust or on how to recover from them. To address these research gaps, we propose a novel method of adaptive trust calibration, consisting of a framework for detecting the status of calibration and cognitive cues called "trust calibration cues". Our framework and four types of trust calibration cues were evaluated in an online experiment with a drone simulator. The results showed that presenting simple cues at the moment of over-trust significantly promoted trust calibration.