Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
38th (2024)
Session ID : 2T5-OS-5b-03

Does Context Affect the Rationales for Human Awareness? Towards Understanding Human Decision Criteria for Trustworthy Explainable AI
*Mingzhe YANG, Rina KAGAWA, Yukino BABA
Abstract

As Artificial Intelligence (AI) achieves high predictive accuracy, its use in supporting human predictive tasks has advanced significantly. As AI becomes more sophisticated, it grows harder for humans to comprehend and retrace how an algorithm arrived at a result. Explainable AI (XAI) has been developed to bridge this gap by providing explanations of the rationale behind predictions. Nevertheless, it remains unclear what constitutes an effective explanation for fostering trust in AI. This study focuses on two factors affecting trust in AI, the importance of the decision outcome and the content of the decision, and explores the basis on which humans make judgments without AI. Our findings suggest that the necessity of explanations for trusting AI varies with the context of AI use, indicating that the explanation needed to gain human trust differs according to the scenario in which the AI is applied.

© 2024 The Japanese Society for Artificial Intelligence