Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
39th (2025)
Session ID : 1E5-OS-3b-05

Self-Disclosure in VTuber's Free Talk Stream: Extraction and Analysis with Large Language Models
*Kuon TANAKA, Shunsuke ITOH, Haruka MATSUKURA, Yuji NOZAKI, Maki SAKAMOTO
Keywords: VTuber, LLM, self-disclosure

Abstract

In recent years, VTubers (Virtual YouTubers) have become increasingly popular, with their number now exceeding 20,000. However, there are few methods for finding a favorite VTuber, and understanding the appeal of a new VTuber takes time. This study aimed to facilitate the quick discovery of a VTuber's appeal by extracting self-disclosure from their chatting streams. First, we collected two chatting streams from each of 96 randomly selected VTubers. Next, based on previous research and qualitative analysis by LLMs, we developed 31 self-disclosure items, including "reflection on experience" and "current goal." We then used GPT-4o-mini to classify whether transcriptions of the chatting streams contained these self-disclosure items, and compared part of the results with human annotations for validation. As a result, over 80% of the VTuber chatting streams contained self-disclosure. In addition, while items related to "goal" and "VTuber activity" were extracted with high accuracy, items such as "interest" and "personality" were extracted with lower accuracy. In conclusion, LLMs have made it possible to analyze self-disclosure in VTuber chatting streams.
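The classification step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the item subset, prompt wording, JSON output format, and the function name classify_segment are assumptions made for demonstration; only the model name (gpt-4o-mini) and the idea of item-wise classification come from the abstract.

    # Minimal sketch: label one transcribed segment of a VTuber chatting
    # stream for self-disclosure items with gpt-4o-mini via the OpenAI API.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical subset of the paper's 31 self-disclosure items.
    ITEMS = ["reflection on experience", "current goal", "interest", "personality"]

    def classify_segment(transcript: str) -> dict:
        """Ask gpt-4o-mini which self-disclosure items appear in the transcript."""
        prompt = (
            "You are annotating a transcript of a VTuber chatting stream.\n"
            "For each item, answer true if the streamer discloses it about "
            "themselves, otherwise false. Return a JSON object with one "
            "boolean per item.\n"
            f"Items: {', '.join(ITEMS)}\n\n"
            f"Transcript:\n{transcript}"
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            response_format={"type": "json_object"},
            temperature=0,
        )
        return json.loads(response.choices[0].message.content)

    # Example usage (hypothetical transcript snippet):
    # labels = classify_segment("Today I want to talk about my goals for this year...")

Per-item labels produced this way could then be aggregated over segments and compared against human annotations, which matches the validation procedure the abstract describes.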

© 2025 The Japanese Society for Artificial Intelligence