Organizer: The Japanese Society for Artificial Intelligence (JSAI)
Meeting: The 102nd Special Interest Group on Spoken Language Understanding and Dialogue Processing (SIG-SLUD)
Session: 102
Venue: National Institute for Japanese Language and Linguistics (NINJAL), Auditorium
Dates: 2024/11/28 - 2024/11/29
p. 160
In this demonstration, we present InteLLA, a multimodal dialogue system designed to assess language learners' proficiency through structured interviews and roleplay scenarios. InteLLA uses adaptive sampling strategies, dynamically adjusting question difficulty in real time to assess both linguistic and interactional competence. The system aims to induce linguistic breakdowns in order to identify a learner's upper proficiency limit, and it evaluates interactional skills such as turn-taking and topic management. Experimental results showed that InteLLA could elicit ratable speech samples comparable to those elicited by experienced teachers, particularly for assessing interactional competence. Field testing with over 10,000 students further demonstrated the system's scalability and reliability. We conclude that InteLLA is an effective tool for large-scale language assessment; future work will focus on improving real-time assessment accuracy and user engagement.
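The abstract does not describe how the adaptive difficulty adjustment is implemented. The sketch below is a minimal, hypothetical illustration of one way such a loop could work: raise the question difficulty after a successful response and step back down when a breakdown is detected, so that the learner's upper limit is bracketed. All identifiers here (`Level`, `ask`, `score_response`) are illustrative assumptions, not names from the paper.

```python
from enum import IntEnum

class Level(IntEnum):
    # Hypothetical CEFR-like difficulty tiers for interview questions.
    A1 = 1
    A2 = 2
    B1 = 3
    B2 = 4
    C1 = 5
    C2 = 6

def adaptive_interview(ask, score_response, start=Level.B1, max_turns=10):
    """Illustrative adaptive-sampling loop (not the authors' implementation).

    ask(level) poses a question at the given difficulty and returns the
    learner's response; score_response(response) returns True when the
    learner handled the question without a linguistic breakdown.
    """
    level = start
    ceiling = None  # lowest level at which a breakdown has been observed
    for _ in range(max_turns):
        response = ask(level)
        if score_response(response):
            # Success: probe a harder question to search for the upper limit.
            if level < Level.C2:
                level = Level(level + 1)
        else:
            # Breakdown: record the ceiling and step back down.
            ceiling = level
            if level > Level.A1:
                level = Level(level - 1)
    # Report the estimated upper proficiency limit.
    return ceiling if ceiling is not None else level
```

In practice, a system like the one described would replace `score_response` with real-time speech and dialogue analysis; this sketch only illustrates the difficulty-search logic implied by "dynamically adjusting question difficulty".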