Host: The Japanese Society for Artificial Intelligence
Name : The 102nd SIG-SLUD
Number : 102
Location : [in Japanese]
Date : November 28, 2024 - November 29, 2024
Pages : 160
In this demonstration, we present InteLLA, a multimodal dialogue system designed to assess language learners' proficiency through structured interviews and roleplay scenarios. InteLLA uses adaptive sampling strategies, dynamically adjusting question difficulty in real time to assess both linguistic and interactional competence. The system aims to induce linguistic breakdowns in order to identify a user's upper proficiency limit and to evaluate interactional skills such as turn-taking and topic management. Experimental results showed that InteLLA elicited ratable speech samples comparable to those obtained by experienced teachers, particularly in assessing interactional competence. Field testing with over 10,000 students further demonstrated the system's scalability and reliability. The authors concluded that InteLLA is an effective tool for large-scale language assessment, though future work will focus on improving real-time assessment accuracy and user engagement.