Transactions of the Virtual Reality Society of Japan
Online ISSN : 2423-9593
Print ISSN : 1344-011X
ISSN-L : 1344-011X
Predicting Praising Skills Using Multimodal Information in Remote Dialogue
Asahi Ogushi, Toshiki Onishi, Ryo Ishii, Atsushi Fukayama, Akihiro Miyata

2024 Volume 29 Issue 3 Pages 127-138

Abstract

Praising behavior in remote dialogue is an important part of communication. However, the behaviors that contribute to good praise have not been clarified. Motivated by this problem, we previously identified the behaviors important for praising well in face-to-face dialogue. Recently, remote dialogue has come to be widely used as a substitute for face-to-face dialogue, and behaviors are perceived differently in the two settings. Therefore, the behaviors important for praising well may also differ between face-to-face and remote dialogues. In this paper, we clarify the behaviors important for praising well in remote dialogue. First, we construct machine learning models that predict praising skills, the degree to which a speaker is able to praise others well, from linguistic, acoustic, and visual behaviors. Second, we analyze the linguistic, acoustic, and visual behaviors associated with good praise. The best-performing model achieves an F1 score of 0.555 and uses acoustic features of the praiser and visual features of the receiver. Analysis of behavior during praising suggests that tone of voice is important for praising well in remote dialogue.
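The abstract does not specify the model architecture or the exact feature sets, so the following is only a minimal early-fusion sketch: it assumes scikit-learn, placeholder feature matrices with hypothetical dimensionalities for the praiser's acoustic features and the receiver's visual features, and a binary praising-skill label, evaluated with the F1 score as in the paper.

```python
# Minimal sketch (not the authors' implementation): predicting a binary
# "high praising skill" label from concatenated multimodal features.
# Feature extraction is assumed to happen upstream; random placeholders
# stand in for acoustic features of the praiser and visual features of
# the receiver, with hypothetical dimensionalities.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n_samples = 200
acoustic_praiser = rng.normal(size=(n_samples, 88))  # assumed acoustic feature dimension
visual_receiver = rng.normal(size=(n_samples, 17))   # assumed visual feature dimension
labels = rng.integers(0, 2, size=n_samples)          # placeholder high/low skill labels

# Early fusion: concatenate the two modalities into one feature vector.
features = np.hstack([acoustic_praiser, visual_receiver])

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0, stratify=labels
)

# Train a simple classifier and report the F1 score on the held-out split.
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test)))
```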

© 2024 THE VIRTUAL REALITY SOCIETY OF JAPAN