Proceedings of the Annual Conference of JSAI
Online ISSN : 2758-7347
36th (2022)
Session ID : 2G6-OS-18b-03

Accent discrimination from speech-imagery EEG.
*Takuro FUKUDA, Shun SAWADA, Hidehumi OHMURA, Kouichi KATSURADA, Motoharu YAMAO, Yurie IRIBE, Ryo TAGUCHI, Tsuneo NITTA
Abstract

Although the analysis of speech-imagery electroencephalogram (EEG) signals has been actively studied, few results have been reported that focus on pitch accent, a linguistic feature of imagined speech. In this report, we propose a complex cepstrum-based method for discriminating accents from speech-imagery EEG signals. We first created a database containing the intervals of imagined spoken syllables, visually labeled from the line spectral patterns of EEG signals obtained after pooling the electrodes. We then constructed an accent discriminator using the complex cepstrum calculated from the amplitude spectrum of the EEG signals during speech imagery. In the discrimination process, an eigenspace is designed for each accent from the training data. Experiments using the subspace method and a tensor product-based compound similarity method showed satisfactory scores in discriminating the different accent types of imagined two-syllable utterances.
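
As a rough illustration of the pipeline the abstract describes (electrode pooling, cepstral features from the amplitude spectrum, per-accent eigenspaces, subspace-method matching), a minimal Python sketch follows. It is not the authors' implementation: the pooling scheme, feature dimensions, and the use of a real-cepstrum approximation in place of the complex cepstrum are illustrative assumptions, and the tensor product-based compound similarity method is not reproduced here.

```python
# Minimal sketch (not the authors' code) of a subspace-method accent
# discriminator over cepstral features, assuming EEG epochs are already
# segmented into imagined-syllable intervals. Array shapes, the pooling
# scheme, and the feature/eigenspace dimensions are assumptions.
import numpy as np

def pooled_amplitude_spectrum(epoch, n_fft=256):
    """epoch: (n_electrodes, n_samples) EEG segment for one imagined syllable.
    Returns the amplitude spectrum averaged (pooled) over electrodes."""
    spec = np.abs(np.fft.rfft(epoch, n=n_fft, axis=1))
    return spec.mean(axis=0)

def cepstrum(amp_spec, n_coef=20):
    """Cepstral coefficients from a pooled amplitude spectrum.
    A real-cepstrum approximation stands in for the complex-cepstrum
    features described in the abstract."""
    log_spec = np.log(amp_spec + 1e-12)
    ceps = np.fft.irfft(log_spec)
    return ceps[:n_coef]

def fit_subspace(features, n_dims=5):
    """Eigenspace (class subspace) from training feature vectors.
    features: (n_samples, n_coef); returns an (n_coef, n_dims) basis."""
    x = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:n_dims].T

def subspace_similarity(x, basis):
    """Squared norm of the projection of x onto the class subspace,
    normalised by ||x||^2 (CLAFIC-style similarity)."""
    proj = basis.T @ x
    return float(proj @ proj) / float(x @ x)

def discriminate(x, subspaces):
    """Return the accent label whose subspace gives the highest similarity.
    subspaces: dict mapping accent label -> basis matrix."""
    return max(subspaces, key=lambda label: subspace_similarity(x, subspaces[label]))
```

In use, one would fit one subspace per accent type from the cepstral vectors of its training epochs and classify a test epoch by the highest projection similarity; the compound similarity reported in the paper would replace `subspace_similarity` in that step.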

© 2022 The Japanese Society for Artificial Intelligence