Host: The Japanese Society for Artificial Intelligence
Name: The 38th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 38
Location: [in Japanese]
Date: May 28, 2024 - May 31, 2024
As Artificial Intelligence (AI) achieves high predictive accuracy, its use in supporting human predictive tasks has advanced significantly. However, as AI becomes more advanced, it becomes harder for humans to comprehend and retrace how the algorithm arrived at a result. Explainable AI (XAI) has been developed to bridge this gap by providing rational explanations that aid comprehension. Even so, it remains unclear what constitutes an effective explanation for fostering trust in AI. This study focuses on two factors affecting trust in AI, the importance of decision outcomes and the content of the decision-making task, and explores how they relate to the basis of human judgment in the absence of AI. Our findings suggest that the necessity of explanation for trust in AI varies with the context of AI use, indicating that the kind of explanation needed to gain human trust differs according to the scenario in which AI is deployed.