JSAI Technical Report, SIG-SLUD
Online ISSN : 2436-4576
Print ISSN : 0918-5682
102nd (Nov. 2024)

InteLLA: Multimodal Dialogue Agent System for Language Proficiency Assessment
Mao SAEKI, Sadahiro YOSHIKAWA, Kevin HENG, Taiki WATAI, Jun INAGAKI, Hiroaki TAKATSU, Masaki EGUCHI, Shungo SUZUKI, Fuma KURATA, Max THOMSON, Elliott BRUCE, Anastasia RAMJAG, Weronika DEMKOW, Daniel TRAVIN, Yoichi MATSUYAMA

Pages 160

Abstract

In this demonstration, we present InteLLA, a multimodal dialogue system designed to assess language learners' proficiency through structured interviews and roleplay scenarios. InteLLA uses adaptive sampling strategies, dynamically adjusting question difficulty in real time to assess both linguistic and interactional competence. The system aims to induce linguistic breakdowns in order to identify a user's upper proficiency limit, and it evaluates interactional skills such as turn-taking and topic management. Experimental results showed that InteLLA could elicit ratable speech samples comparable to those elicited by experienced human teachers, particularly for assessing interactional competence. Field testing with over 10,000 students further demonstrated the system's scalability and reliability. The authors concluded that InteLLA is an effective tool for large-scale language assessment, though future work will focus on improving real-time assessment accuracy and user engagement.
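The abstract describes adaptive sampling that raises question difficulty in real time until linguistic breakdowns appear, which is conceptually similar to a staircase procedure. The Python sketch below is a hypothetical illustration of such a controller only; the class name, thresholds, and update rules are assumptions made for illustration and are not InteLLA's published method.

    # Hypothetical illustration: a minimal staircase-style controller for adjusting
    # question difficulty between interview turns. The actual InteLLA adaptive
    # sampling strategy is not specified in this abstract.

    from dataclasses import dataclass


    @dataclass
    class DifficultyController:
        level: int = 3          # current difficulty band (1 = easiest), assumed scale
        min_level: int = 1
        max_level: int = 6
        breakdowns: int = 0     # consecutive turns judged as linguistic breakdowns

        def update(self, turn_score: float, breakdown: bool) -> int:
            """Move up after a fluent turn, back off after repeated breakdowns.

            turn_score: automatic rating of the learner's last response in [0, 1].
            breakdown:  whether the turn showed a communication breakdown.
            Returns the difficulty level to use for the next question.
            """
            if breakdown:
                self.breakdowns += 1
                if self.breakdowns >= 2:            # two breakdowns in a row: step down
                    self.level = max(self.min_level, self.level - 1)
                    self.breakdowns = 0
            elif turn_score >= 0.7:                 # confident success: probe a harder question
                self.level = min(self.max_level, self.level + 1)
                self.breakdowns = 0
            return self.level


    controller = DifficultyController()
    next_level = controller.update(turn_score=0.85, breakdown=False)  # -> 4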

© 2024 The Japanese Society for Artificial Intelligence