Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
33rd (2019)
Session ID : 4F2-OS-11a-02

Estimating Verbal and Nonverbal Skills in Business Presentation
*Yutaro Yagi, Shogo Okada, Shota Shiobara, Sota Sugimura
Abstract

This paper develops a model for estimating the presentation skills of each participant from multimodal (verbal and nonverbal) features. For this purpose, we use a multimodal presentation dataset comprising audio signals, body-motion sensor data, and transcripts of participants' speech observed in 58 presentation sessions. The dataset also includes the presentation skills of each participant, assessed by two external observers from the Human Resources Department. We extracted various kinds of features, such as spoken utterances, acoustic features, and the amount of body motion, to estimate the presentation skills. We built a regression model that infers the level of presentation skill from these features using support vector regression, and evaluated its estimation accuracy. Experimental results show that the multimodal model achieved an R² of 0.59 as the regression accuracy for effective production elements.
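The pipeline the abstract describes (multimodal features in, skill score out, via support vector regression, evaluated with R²) can be sketched as follows. This is a minimal illustration with synthetic data, not the authors' dataset or feature set; the feature dimensions and the linear target are placeholder assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three modalities described in the paper:
# utterance/text features, acoustic features, and body-motion features.
# The real dataset has 58 presentation sessions; we mirror that size.
n_sessions = 58
X = rng.normal(size=(n_sessions, 6))

# Hypothetical skill score (in the real dataset, assessed by two external
# observers); here a noisy linear combination purely for illustration.
y = X @ np.array([0.8, -0.5, 0.3, 0.0, 0.6, -0.2]) + 0.1 * rng.normal(size=n_sessions)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Support vector regression, as in the paper; kernel and hyperparameters
# here are sklearn defaults, not the authors' settings.
model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X_train, y_train)

r2 = r2_score(y_test, model.predict(X_test))
print(f"R^2 = {r2:.2f}")
```

In practice one would concatenate the per-modality feature vectors (or train per-modality models and compare), and tune `C`, `epsilon`, and the kernel by cross-validation given the small number of sessions.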

© 2019 The Japanese Society for Artificial Intelligence